00:00:00.001 Started by upstream project "autotest-per-patch" build number 132430 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.066 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.067 The recommended git tool is: git 00:00:00.067 using credential 00000000-0000-0000-0000-000000000002 00:00:00.069 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.106 Fetching changes from the remote Git repository 00:00:00.108 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.169 Using shallow fetch with depth 1 00:00:00.169 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.169 > git --version # timeout=10 00:00:00.215 > git --version # 'git version 2.39.2' 00:00:00.215 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.257 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.258 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.503 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.514 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.524 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.524 > git config core.sparsecheckout # timeout=10 00:00:04.536 > git read-tree -mu HEAD # timeout=10 00:00:04.552 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.577 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.577 > git 
rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.692 [Pipeline] Start of Pipeline 00:00:04.706 [Pipeline] library 00:00:04.709 Loading library shm_lib@master 00:00:04.709 Library shm_lib@master is cached. Copying from home. 00:00:04.724 [Pipeline] node 00:00:04.731 Running on WFP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.733 [Pipeline] { 00:00:04.745 [Pipeline] catchError 00:00:04.746 [Pipeline] { 00:00:04.757 [Pipeline] wrap 00:00:04.765 [Pipeline] { 00:00:04.773 [Pipeline] stage 00:00:04.775 [Pipeline] { (Prologue) 00:00:04.989 [Pipeline] sh 00:00:05.275 + logger -p user.info -t JENKINS-CI 00:00:05.292 [Pipeline] echo 00:00:05.293 Node: WFP6 00:00:05.299 [Pipeline] sh 00:00:05.596 [Pipeline] setCustomBuildProperty 00:00:05.606 [Pipeline] echo 00:00:05.607 Cleanup processes 00:00:05.611 [Pipeline] sh 00:00:05.893 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.893 3365912 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.906 [Pipeline] sh 00:00:06.190 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.191 ++ grep -v 'sudo pgrep' 00:00:06.191 ++ awk '{print $1}' 00:00:06.191 + sudo kill -9 00:00:06.191 + true 00:00:06.208 [Pipeline] cleanWs 00:00:06.219 [WS-CLEANUP] Deleting project workspace... 00:00:06.219 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.226 [WS-CLEANUP] done 00:00:06.230 [Pipeline] setCustomBuildProperty 00:00:06.240 [Pipeline] sh 00:00:06.518 + sudo git config --global --replace-all safe.directory '*' 00:00:06.595 [Pipeline] httpRequest 00:00:06.927 [Pipeline] echo 00:00:06.929 Sorcerer 10.211.164.20 is alive 00:00:06.937 [Pipeline] retry 00:00:06.938 [Pipeline] { 00:00:06.947 [Pipeline] httpRequest 00:00:06.951 HttpMethod: GET 00:00:06.951 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.951 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.970 Response Code: HTTP/1.1 200 OK 00:00:06.970 Success: Status code 200 is in the accepted range: 200,404 00:00:06.973 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.187 [Pipeline] } 00:00:12.206 [Pipeline] // retry 00:00:12.213 [Pipeline] sh 00:00:12.500 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.517 [Pipeline] httpRequest 00:00:12.890 [Pipeline] echo 00:00:12.892 Sorcerer 10.211.164.20 is alive 00:00:12.902 [Pipeline] retry 00:00:12.904 [Pipeline] { 00:00:12.920 [Pipeline] httpRequest 00:00:12.925 HttpMethod: GET 00:00:12.925 URL: http://10.211.164.20/packages/spdk_bd9804982b9b265de6b0a12adae1f01f6be42ea3.tar.gz 00:00:12.926 Sending request to url: http://10.211.164.20/packages/spdk_bd9804982b9b265de6b0a12adae1f01f6be42ea3.tar.gz 00:00:12.946 Response Code: HTTP/1.1 200 OK 00:00:12.946 Success: Status code 200 is in the accepted range: 200,404 00:00:12.947 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_bd9804982b9b265de6b0a12adae1f01f6be42ea3.tar.gz 00:00:48.445 [Pipeline] } 00:00:48.463 [Pipeline] // retry 00:00:48.471 [Pipeline] sh 00:00:48.755 + tar --no-same-owner -xf spdk_bd9804982b9b265de6b0a12adae1f01f6be42ea3.tar.gz 00:00:51.306 [Pipeline] sh 00:00:51.596 + git -C spdk log 
--oneline -n5 00:00:51.596 bd9804982 bdevperf: g_main_thread calls bdev_open() instead of job->thread 00:00:51.596 2e015e34f bdevperf: Remove TAILQ_REMOVE which may result in potential memory leak 00:00:51.596 aae11995f bdev/malloc: Fix unexpected DIF verification error for initial read 00:00:51.596 7bc1aace1 dif: Set DIF field to 0 explicitly if its check is disabled 00:00:51.596 ce2cd8dc9 bdev: Insert metadata using bounce/accel buffer if I/O is not aware of metadata 00:00:51.608 [Pipeline] } 00:00:51.622 [Pipeline] // stage 00:00:51.631 [Pipeline] stage 00:00:51.634 [Pipeline] { (Prepare) 00:00:51.652 [Pipeline] writeFile 00:00:51.668 [Pipeline] sh 00:00:51.952 + logger -p user.info -t JENKINS-CI 00:00:51.965 [Pipeline] sh 00:00:52.248 + logger -p user.info -t JENKINS-CI 00:00:52.261 [Pipeline] sh 00:00:52.545 + cat autorun-spdk.conf 00:00:52.545 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:52.545 SPDK_TEST_NVMF=1 00:00:52.545 SPDK_TEST_NVME_CLI=1 00:00:52.545 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:52.545 SPDK_TEST_NVMF_NICS=e810 00:00:52.545 SPDK_TEST_VFIOUSER=1 00:00:52.545 SPDK_RUN_UBSAN=1 00:00:52.545 NET_TYPE=phy 00:00:52.553 RUN_NIGHTLY=0 00:00:52.559 [Pipeline] readFile 00:00:52.585 [Pipeline] withEnv 00:00:52.587 [Pipeline] { 00:00:52.601 [Pipeline] sh 00:00:52.886 + set -ex 00:00:52.886 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:52.886 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:52.886 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:52.886 ++ SPDK_TEST_NVMF=1 00:00:52.886 ++ SPDK_TEST_NVME_CLI=1 00:00:52.886 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:52.886 ++ SPDK_TEST_NVMF_NICS=e810 00:00:52.886 ++ SPDK_TEST_VFIOUSER=1 00:00:52.886 ++ SPDK_RUN_UBSAN=1 00:00:52.886 ++ NET_TYPE=phy 00:00:52.886 ++ RUN_NIGHTLY=0 00:00:52.886 + case $SPDK_TEST_NVMF_NICS in 00:00:52.886 + DRIVERS=ice 00:00:52.886 + [[ tcp == \r\d\m\a ]] 00:00:52.886 + [[ -n ice ]] 00:00:52.886 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 
00:00:52.886 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:52.886 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:52.886 rmmod: ERROR: Module irdma is not currently loaded
00:00:52.886 rmmod: ERROR: Module i40iw is not currently loaded
00:00:52.886 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:52.886 + true
00:00:52.886 + for D in $DRIVERS
00:00:52.886 + sudo modprobe ice
00:00:52.886 + exit 0
00:00:52.896 [Pipeline] }
00:00:52.914 [Pipeline] // withEnv
00:00:52.920 [Pipeline] }
00:00:52.935 [Pipeline] // stage
00:00:52.947 [Pipeline] catchError
00:00:52.950 [Pipeline] {
00:00:52.966 [Pipeline] timeout
00:00:52.966 Timeout set to expire in 1 hr 0 min
00:00:52.968 [Pipeline] {
00:00:52.984 [Pipeline] stage
00:00:52.986 [Pipeline] { (Tests)
00:00:53.002 [Pipeline] sh
00:00:53.317 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:53.317 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:53.317 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:53.317 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:53.317 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:53.317 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:53.317 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:53.317 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:53.317 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:53.317 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:53.317 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:00:53.317 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:53.317 + source /etc/os-release
00:00:53.317 ++ NAME='Fedora Linux'
00:00:53.317 ++ VERSION='39 (Cloud Edition)'
00:00:53.317 ++ ID=fedora
00:00:53.317 ++ VERSION_ID=39
00:00:53.317 ++ VERSION_CODENAME=
00:00:53.317 ++ PLATFORM_ID=platform:f39
00:00:53.317 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:00:53.317 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:53.317 ++ LOGO=fedora-logo-icon
00:00:53.317 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:00:53.317 ++ HOME_URL=https://fedoraproject.org/
00:00:53.317 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:00:53.317 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:53.317 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:53.317 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:53.317 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:00:53.317 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:53.317 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:00:53.317 ++ SUPPORT_END=2024-11-12
00:00:53.317 ++ VARIANT='Cloud Edition'
00:00:53.317 ++ VARIANT_ID=cloud
00:00:53.317 + uname -a
00:00:53.317 Linux spdk-wfp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:00:53.317 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:55.885 Hugepages
00:00:55.885 node hugesize free / total
00:00:55.885 node0 1048576kB 0 / 0
00:00:55.885 node0 2048kB 0 / 0
00:00:55.885 node1 1048576kB 0 / 0
00:00:55.885 node1 2048kB 0 / 0
00:00:55.885
00:00:55.885 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:55.885 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:55.885 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:55.885 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:55.885 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:55.885 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:55.885 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:55.885 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:55.885 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:55.885 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:00:55.885 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:55.885 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:55.885 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:55.885 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:55.885 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:55.885 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:55.885 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:55.885 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:55.885 + rm -f /tmp/spdk-ld-path
00:00:55.885 + source autorun-spdk.conf
00:00:55.885 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:55.885 ++ SPDK_TEST_NVMF=1
00:00:55.885 ++ SPDK_TEST_NVME_CLI=1
00:00:55.885 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:55.885 ++ SPDK_TEST_NVMF_NICS=e810
00:00:55.885 ++ SPDK_TEST_VFIOUSER=1
00:00:55.885 ++ SPDK_RUN_UBSAN=1
00:00:55.885 ++ NET_TYPE=phy
00:00:55.885 ++ RUN_NIGHTLY=0
00:00:55.885 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:55.885 + [[ -n '' ]]
00:00:55.885 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:55.885 + for M in /var/spdk/build-*-manifest.txt
00:00:55.885 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:00:55.885 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:55.885 + for M in /var/spdk/build-*-manifest.txt
00:00:55.885 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:55.885 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:55.885 + for M in /var/spdk/build-*-manifest.txt
00:00:55.885 + [[ -f
/var/spdk/build-repo-manifest.txt ]] 00:00:55.885 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:55.885 ++ uname 00:00:55.885 + [[ Linux == \L\i\n\u\x ]] 00:00:55.885 + sudo dmesg -T 00:00:55.885 + sudo dmesg --clear 00:00:55.885 + dmesg_pid=3366836 00:00:55.885 + [[ Fedora Linux == FreeBSD ]] 00:00:55.885 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:55.885 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:55.885 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:55.885 + [[ -x /usr/src/fio-static/fio ]] 00:00:55.885 + export FIO_BIN=/usr/src/fio-static/fio 00:00:55.885 + FIO_BIN=/usr/src/fio-static/fio 00:00:55.885 + sudo dmesg -Tw 00:00:55.885 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:55.885 + [[ ! -v VFIO_QEMU_BIN ]] 00:00:55.885 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:55.885 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:55.885 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:55.885 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:55.885 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:55.885 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:55.885 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:56.146 18:38:18 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:00:56.146 18:38:18 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:56.146 18:38:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:56.146 18:38:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:00:56.146 18:38:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:00:56.146 18:38:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 
00:00:56.146 18:38:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:00:56.146 18:38:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:00:56.146 18:38:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:00:56.146 18:38:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:00:56.146 18:38:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:00:56.146 18:38:18 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:00:56.146 18:38:18 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:56.146 18:38:18 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:00:56.146 18:38:18 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:56.146 18:38:18 -- scripts/common.sh@15 -- $ shopt -s extglob 00:00:56.146 18:38:18 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:56.146 18:38:18 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:56.146 18:38:18 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:56.146 18:38:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:56.146 18:38:18 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:56.146 18:38:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:56.146 18:38:18 -- paths/export.sh@5 -- $ export PATH 00:00:56.146 18:38:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:56.146 18:38:18 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:56.146 18:38:18 -- common/autobuild_common.sh@493 -- $ date +%s 00:00:56.146 18:38:18 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732124298.XXXXXX 00:00:56.146 18:38:18 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732124298.xkYPLG 00:00:56.146 18:38:18 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:00:56.146 18:38:18 -- 
common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:00:56.146 18:38:18 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:56.146 18:38:18 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:56.146 18:38:18 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:56.146 18:38:18 -- common/autobuild_common.sh@509 -- $ get_config_params 00:00:56.146 18:38:18 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:00:56.146 18:38:18 -- common/autotest_common.sh@10 -- $ set +x 00:00:56.146 18:38:18 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:56.146 18:38:18 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:00:56.146 18:38:18 -- pm/common@17 -- $ local monitor 00:00:56.146 18:38:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:56.146 18:38:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:56.146 18:38:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:56.146 18:38:18 -- pm/common@21 -- $ date +%s 00:00:56.146 18:38:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:56.146 18:38:18 -- pm/common@21 -- $ date +%s 00:00:56.146 18:38:18 -- pm/common@25 -- $ sleep 1 00:00:56.146 18:38:18 -- pm/common@21 -- $ date +%s 00:00:56.146 18:38:18 -- pm/common@21 -- $ date +%s 00:00:56.146 18:38:18 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732124298 00:00:56.146 18:38:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732124298 00:00:56.146 18:38:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732124298 00:00:56.146 18:38:18 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732124298 00:00:56.146 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732124298_collect-vmstat.pm.log 00:00:56.147 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732124298_collect-cpu-load.pm.log 00:00:56.147 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732124298_collect-cpu-temp.pm.log 00:00:56.147 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732124298_collect-bmc-pm.bmc.pm.log 00:00:57.085 18:38:19 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:00:57.085 18:38:19 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:57.085 18:38:19 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:57.085 18:38:19 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:57.085 18:38:19 -- spdk/autobuild.sh@16 -- $ date -u 00:00:57.085 Wed Nov 20 05:38:19 PM UTC 2024 00:00:57.085 18:38:19 -- spdk/autobuild.sh@17 -- $ git describe --tags 
00:00:57.085 v25.01-pre-236-gbd9804982
00:00:57.085 18:38:19 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:57.085 18:38:19 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:57.085 18:38:19 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:57.085 18:38:19 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:00:57.085 18:38:19 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:00:57.085 18:38:19 -- common/autotest_common.sh@10 -- $ set +x
00:00:57.345 ************************************
00:00:57.345 START TEST ubsan
00:00:57.345 ************************************
00:00:57.345 18:38:19 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:00:57.345 using ubsan
00:00:57.345
00:00:57.345 real 0m0.000s
00:00:57.345 user 0m0.000s
00:00:57.345 sys 0m0.000s
00:00:57.345 18:38:19 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:00:57.345 18:38:19 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:57.345 ************************************
00:00:57.345 END TEST ubsan
00:00:57.345 ************************************
00:00:57.345 18:38:19 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:57.345 18:38:19 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:57.345 18:38:19 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:57.345 18:38:19 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:57.345 18:38:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:57.345 18:38:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:57.345 18:38:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:57.345 18:38:19 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:57.345 18:38:19 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:00:57.345 Using default SPDK env in
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:00:57.345 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:00:57.914 Using 'verbs' RDMA provider
00:01:10.699 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:22.916 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:22.916 Creating mk/config.mk...done.
00:01:22.916 Creating mk/cc.flags.mk...done.
00:01:22.916 Type 'make' to build.
00:01:22.916 18:38:44 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
00:01:22.916 18:38:44 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:22.916 18:38:44 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:22.916 18:38:44 -- common/autotest_common.sh@10 -- $ set +x
00:01:22.916 ************************************
00:01:22.916 START TEST make
00:01:22.916 ************************************
00:01:22.916 18:38:44 make -- common/autotest_common.sh@1129 -- $ make -j96
00:01:23.174 make[1]: Nothing to be done for 'all'.
00:01:24.562 The Meson build system
00:01:24.562 Version: 1.5.0
00:01:24.562 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:24.562 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:24.562 Build type: native build
00:01:24.562 Project name: libvfio-user
00:01:24.562 Project version: 0.0.1
00:01:24.562 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:24.562 C linker for the host machine: cc ld.bfd 2.40-14
00:01:24.562 Host machine cpu family: x86_64
00:01:24.562 Host machine cpu: x86_64
00:01:24.562 Run-time dependency threads found: YES
00:01:24.562 Library dl found: YES
00:01:24.562 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:24.562 Run-time dependency json-c found: YES 0.17
00:01:24.562 Run-time dependency cmocka found: YES 1.1.7
00:01:24.562 Program pytest-3 found: NO
00:01:24.562 Program flake8 found: NO
00:01:24.562 Program misspell-fixer found: NO
00:01:24.562 Program restructuredtext-lint found: NO
00:01:24.562 Program valgrind found: YES (/usr/bin/valgrind)
00:01:24.562 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:24.562 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:24.562 Compiler for C supports arguments -Wwrite-strings: YES
00:01:24.562 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:24.562 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:24.562 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:24.562 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:24.562 Build targets in project: 8
00:01:24.562 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:24.562 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:24.562
00:01:24.562 libvfio-user 0.0.1
00:01:24.562
00:01:24.562 User defined options
00:01:24.562 buildtype : debug
00:01:24.562 default_library: shared
00:01:24.562 libdir : /usr/local/lib
00:01:24.562
00:01:24.562 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:25.130 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:25.130 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:25.130 [2/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:25.130 [3/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:25.130 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:25.130 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:25.130 [6/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:25.130 [7/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:25.130 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:25.130 [9/37] Compiling C object samples/null.p/null.c.o
00:01:25.130 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:25.130 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:25.130 [12/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:25.389 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:25.389 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:25.389 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:25.389 [16/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:25.389 [17/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:25.389 [18/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:25.389 [19/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:25.389 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:25.389 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:25.389 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:25.389 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:25.389 [24/37] Compiling C object samples/server.p/server.c.o
00:01:25.389 [25/37] Compiling C object samples/client.p/client.c.o
00:01:25.389 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:25.389 [27/37] Linking target samples/client
00:01:25.389 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:25.389 [29/37] Linking target lib/libvfio-user.so.0.0.1
00:01:25.389 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:25.389 [31/37] Linking target test/unit_tests
00:01:25.647 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:25.647 [33/37] Linking target samples/lspci
00:01:25.648 [34/37] Linking target samples/server
00:01:25.648 [35/37] Linking target samples/null
00:01:25.648 [36/37] Linking target samples/gpio-pci-idio-16
00:01:25.648 [37/37] Linking target samples/shadow_ioeventfd_server
00:01:25.648 INFO: autodetecting backend as ninja
00:01:25.648 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:25.648 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:26.214 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:26.214 ninja: no work to do.
00:01:31.480 The Meson build system
00:01:31.480 Version: 1.5.0
00:01:31.480 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:31.480 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:31.480 Build type: native build
00:01:31.480 Program cat found: YES (/usr/bin/cat)
00:01:31.480 Project name: DPDK
00:01:31.480 Project version: 24.03.0
00:01:31.480 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:31.480 C linker for the host machine: cc ld.bfd 2.40-14
00:01:31.480 Host machine cpu family: x86_64
00:01:31.480 Host machine cpu: x86_64
00:01:31.480 Message: ## Building in Developer Mode ##
00:01:31.480 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:31.480 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:31.480 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:31.480 Program python3 found: YES (/usr/bin/python3)
00:01:31.480 Program cat found: YES (/usr/bin/cat)
00:01:31.480 Compiler for C supports arguments -march=native: YES
00:01:31.480 Checking for size of "void *" : 8
00:01:31.480 Checking for size of "void *" : 8 (cached)
00:01:31.480 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:01:31.480 Library m found: YES
00:01:31.480 Library numa found: YES
00:01:31.480 Has header "numaif.h" : YES
00:01:31.480 Library fdt found: NO
00:01:31.480 Library execinfo found: NO
00:01:31.480 Has header "execinfo.h" : YES
00:01:31.480 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:31.480 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:31.480 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:31.480 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:31.480 Run-time dependency openssl found: YES 3.1.1
00:01:31.480 Run-time dependency libpcap found: YES 1.10.4
00:01:31.480 Has header "pcap.h" with dependency libpcap: YES
00:01:31.480 Compiler for C supports arguments -Wcast-qual: YES
00:01:31.480 Compiler for C supports arguments -Wdeprecated: YES
00:01:31.480 Compiler for C supports arguments -Wformat: YES
00:01:31.480 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:31.480 Compiler for C supports arguments -Wformat-security: NO
00:01:31.480 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:31.480 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:31.480 Compiler for C supports arguments -Wnested-externs: YES
00:01:31.480 Compiler for C supports arguments -Wold-style-definition: YES
00:01:31.480 Compiler for C supports arguments -Wpointer-arith: YES
00:01:31.480 Compiler for C supports arguments -Wsign-compare: YES
00:01:31.480 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:31.480 Compiler for C supports arguments -Wundef: YES
00:01:31.480 Compiler for C supports arguments -Wwrite-strings: YES
00:01:31.480 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:31.480 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:31.480 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:31.480 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:31.480 Program objdump found: YES (/usr/bin/objdump)
00:01:31.480 Compiler for C supports arguments -mavx512f: YES
00:01:31.480 Checking if "AVX512 checking" compiles: YES
00:01:31.480 Fetching value of define "__SSE4_2__" : 1
00:01:31.480 Fetching value of define "__AES__" : 1
00:01:31.480 Fetching value of define "__AVX__" : 1
00:01:31.480 Fetching value of define "__AVX2__" : 1
00:01:31.480 Fetching value of define "__AVX512BW__" : 1
00:01:31.480 Fetching value of define "__AVX512CD__" : 1
00:01:31.480 Fetching value of define "__AVX512DQ__" : 1
00:01:31.480 Fetching value of define "__AVX512F__" : 1
00:01:31.480 Fetching value of define "__AVX512VL__" : 1
00:01:31.480 Fetching value of define "__PCLMUL__" : 1
00:01:31.480 Fetching value of define "__RDRND__" : 1
00:01:31.480 Fetching value of define "__RDSEED__" : 1
00:01:31.480 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:31.480 Fetching value of define "__znver1__" : (undefined)
00:01:31.480 Fetching value of define "__znver2__" : (undefined)
00:01:31.480 Fetching value of define "__znver3__" : (undefined)
00:01:31.480 Fetching value of define "__znver4__" : (undefined)
00:01:31.480 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:31.480 Message: lib/log: Defining dependency "log"
00:01:31.480 Message: lib/kvargs: Defining dependency "kvargs"
00:01:31.480 Message: lib/telemetry: Defining dependency "telemetry"
00:01:31.480 Checking for function "getentropy" : NO
00:01:31.480 Message: lib/eal: Defining dependency "eal"
00:01:31.480 Message: lib/ring: Defining dependency "ring"
00:01:31.480 Message: lib/rcu: Defining dependency "rcu"
00:01:31.480 Message: lib/mempool: Defining dependency "mempool"
00:01:31.480 Message: lib/mbuf: Defining dependency "mbuf"
00:01:31.480 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:31.480 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:31.480 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:31.480 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:31.480 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:31.480 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:31.480 Compiler for C supports arguments -mpclmul: YES
00:01:31.480 Compiler for C supports arguments -maes: YES
00:01:31.480 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:31.480 Compiler for C supports arguments -mavx512bw: YES
00:01:31.480 Compiler for C supports arguments -mavx512dq: YES
00:01:31.480 Compiler for C supports arguments -mavx512vl: YES
00:01:31.480 Compiler for C supports arguments
-mvpclmulqdq: YES 00:01:31.480 Compiler for C supports arguments -mavx2: YES 00:01:31.480 Compiler for C supports arguments -mavx: YES 00:01:31.480 Message: lib/net: Defining dependency "net" 00:01:31.480 Message: lib/meter: Defining dependency "meter" 00:01:31.480 Message: lib/ethdev: Defining dependency "ethdev" 00:01:31.480 Message: lib/pci: Defining dependency "pci" 00:01:31.480 Message: lib/cmdline: Defining dependency "cmdline" 00:01:31.480 Message: lib/hash: Defining dependency "hash" 00:01:31.480 Message: lib/timer: Defining dependency "timer" 00:01:31.480 Message: lib/compressdev: Defining dependency "compressdev" 00:01:31.480 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:31.480 Message: lib/dmadev: Defining dependency "dmadev" 00:01:31.480 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:31.480 Message: lib/power: Defining dependency "power" 00:01:31.480 Message: lib/reorder: Defining dependency "reorder" 00:01:31.480 Message: lib/security: Defining dependency "security" 00:01:31.480 Has header "linux/userfaultfd.h" : YES 00:01:31.480 Has header "linux/vduse.h" : YES 00:01:31.480 Message: lib/vhost: Defining dependency "vhost" 00:01:31.480 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:31.480 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:31.480 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:31.480 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:31.480 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:31.480 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:31.480 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:31.480 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:31.480 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:31.480 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:01:31.480 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:31.480 Configuring doxy-api-html.conf using configuration 00:01:31.480 Configuring doxy-api-man.conf using configuration 00:01:31.480 Program mandb found: YES (/usr/bin/mandb) 00:01:31.480 Program sphinx-build found: NO 00:01:31.480 Configuring rte_build_config.h using configuration 00:01:31.480 Message: 00:01:31.480 ================= 00:01:31.480 Applications Enabled 00:01:31.480 ================= 00:01:31.480 00:01:31.480 apps: 00:01:31.480 00:01:31.480 00:01:31.480 Message: 00:01:31.480 ================= 00:01:31.480 Libraries Enabled 00:01:31.480 ================= 00:01:31.480 00:01:31.480 libs: 00:01:31.480 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:31.480 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:31.480 cryptodev, dmadev, power, reorder, security, vhost, 00:01:31.480 00:01:31.480 Message: 00:01:31.480 =============== 00:01:31.480 Drivers Enabled 00:01:31.480 =============== 00:01:31.480 00:01:31.480 common: 00:01:31.480 00:01:31.480 bus: 00:01:31.480 pci, vdev, 00:01:31.480 mempool: 00:01:31.480 ring, 00:01:31.480 dma: 00:01:31.480 00:01:31.480 net: 00:01:31.480 00:01:31.480 crypto: 00:01:31.480 00:01:31.480 compress: 00:01:31.480 00:01:31.480 vdpa: 00:01:31.480 00:01:31.480 00:01:31.480 Message: 00:01:31.480 ================= 00:01:31.480 Content Skipped 00:01:31.480 ================= 00:01:31.480 00:01:31.480 apps: 00:01:31.480 dumpcap: explicitly disabled via build config 00:01:31.480 graph: explicitly disabled via build config 00:01:31.480 pdump: explicitly disabled via build config 00:01:31.480 proc-info: explicitly disabled via build config 00:01:31.480 test-acl: explicitly disabled via build config 00:01:31.480 test-bbdev: explicitly disabled via build config 00:01:31.480 test-cmdline: explicitly disabled via build config 00:01:31.480 test-compress-perf: explicitly disabled via build config 00:01:31.480 test-crypto-perf: explicitly disabled 
via build config 00:01:31.480 test-dma-perf: explicitly disabled via build config 00:01:31.481 test-eventdev: explicitly disabled via build config 00:01:31.481 test-fib: explicitly disabled via build config 00:01:31.481 test-flow-perf: explicitly disabled via build config 00:01:31.481 test-gpudev: explicitly disabled via build config 00:01:31.481 test-mldev: explicitly disabled via build config 00:01:31.481 test-pipeline: explicitly disabled via build config 00:01:31.481 test-pmd: explicitly disabled via build config 00:01:31.481 test-regex: explicitly disabled via build config 00:01:31.481 test-sad: explicitly disabled via build config 00:01:31.481 test-security-perf: explicitly disabled via build config 00:01:31.481 00:01:31.481 libs: 00:01:31.481 argparse: explicitly disabled via build config 00:01:31.481 metrics: explicitly disabled via build config 00:01:31.481 acl: explicitly disabled via build config 00:01:31.481 bbdev: explicitly disabled via build config 00:01:31.481 bitratestats: explicitly disabled via build config 00:01:31.481 bpf: explicitly disabled via build config 00:01:31.481 cfgfile: explicitly disabled via build config 00:01:31.481 distributor: explicitly disabled via build config 00:01:31.481 efd: explicitly disabled via build config 00:01:31.481 eventdev: explicitly disabled via build config 00:01:31.481 dispatcher: explicitly disabled via build config 00:01:31.481 gpudev: explicitly disabled via build config 00:01:31.481 gro: explicitly disabled via build config 00:01:31.481 gso: explicitly disabled via build config 00:01:31.481 ip_frag: explicitly disabled via build config 00:01:31.481 jobstats: explicitly disabled via build config 00:01:31.481 latencystats: explicitly disabled via build config 00:01:31.481 lpm: explicitly disabled via build config 00:01:31.481 member: explicitly disabled via build config 00:01:31.481 pcapng: explicitly disabled via build config 00:01:31.481 rawdev: explicitly disabled via build config 00:01:31.481 regexdev: 
explicitly disabled via build config 00:01:31.481 mldev: explicitly disabled via build config 00:01:31.481 rib: explicitly disabled via build config 00:01:31.481 sched: explicitly disabled via build config 00:01:31.481 stack: explicitly disabled via build config 00:01:31.481 ipsec: explicitly disabled via build config 00:01:31.481 pdcp: explicitly disabled via build config 00:01:31.481 fib: explicitly disabled via build config 00:01:31.481 port: explicitly disabled via build config 00:01:31.481 pdump: explicitly disabled via build config 00:01:31.481 table: explicitly disabled via build config 00:01:31.481 pipeline: explicitly disabled via build config 00:01:31.481 graph: explicitly disabled via build config 00:01:31.481 node: explicitly disabled via build config 00:01:31.481 00:01:31.481 drivers: 00:01:31.481 common/cpt: not in enabled drivers build config 00:01:31.481 common/dpaax: not in enabled drivers build config 00:01:31.481 common/iavf: not in enabled drivers build config 00:01:31.481 common/idpf: not in enabled drivers build config 00:01:31.481 common/ionic: not in enabled drivers build config 00:01:31.481 common/mvep: not in enabled drivers build config 00:01:31.481 common/octeontx: not in enabled drivers build config 00:01:31.481 bus/auxiliary: not in enabled drivers build config 00:01:31.481 bus/cdx: not in enabled drivers build config 00:01:31.481 bus/dpaa: not in enabled drivers build config 00:01:31.481 bus/fslmc: not in enabled drivers build config 00:01:31.481 bus/ifpga: not in enabled drivers build config 00:01:31.481 bus/platform: not in enabled drivers build config 00:01:31.481 bus/uacce: not in enabled drivers build config 00:01:31.481 bus/vmbus: not in enabled drivers build config 00:01:31.481 common/cnxk: not in enabled drivers build config 00:01:31.481 common/mlx5: not in enabled drivers build config 00:01:31.481 common/nfp: not in enabled drivers build config 00:01:31.481 common/nitrox: not in enabled drivers build config 00:01:31.481 
common/qat: not in enabled drivers build config 00:01:31.481 common/sfc_efx: not in enabled drivers build config 00:01:31.481 mempool/bucket: not in enabled drivers build config 00:01:31.481 mempool/cnxk: not in enabled drivers build config 00:01:31.481 mempool/dpaa: not in enabled drivers build config 00:01:31.481 mempool/dpaa2: not in enabled drivers build config 00:01:31.481 mempool/octeontx: not in enabled drivers build config 00:01:31.481 mempool/stack: not in enabled drivers build config 00:01:31.481 dma/cnxk: not in enabled drivers build config 00:01:31.481 dma/dpaa: not in enabled drivers build config 00:01:31.481 dma/dpaa2: not in enabled drivers build config 00:01:31.481 dma/hisilicon: not in enabled drivers build config 00:01:31.481 dma/idxd: not in enabled drivers build config 00:01:31.481 dma/ioat: not in enabled drivers build config 00:01:31.481 dma/skeleton: not in enabled drivers build config 00:01:31.481 net/af_packet: not in enabled drivers build config 00:01:31.481 net/af_xdp: not in enabled drivers build config 00:01:31.481 net/ark: not in enabled drivers build config 00:01:31.481 net/atlantic: not in enabled drivers build config 00:01:31.481 net/avp: not in enabled drivers build config 00:01:31.481 net/axgbe: not in enabled drivers build config 00:01:31.481 net/bnx2x: not in enabled drivers build config 00:01:31.481 net/bnxt: not in enabled drivers build config 00:01:31.481 net/bonding: not in enabled drivers build config 00:01:31.481 net/cnxk: not in enabled drivers build config 00:01:31.481 net/cpfl: not in enabled drivers build config 00:01:31.481 net/cxgbe: not in enabled drivers build config 00:01:31.481 net/dpaa: not in enabled drivers build config 00:01:31.481 net/dpaa2: not in enabled drivers build config 00:01:31.481 net/e1000: not in enabled drivers build config 00:01:31.481 net/ena: not in enabled drivers build config 00:01:31.481 net/enetc: not in enabled drivers build config 00:01:31.481 net/enetfec: not in enabled drivers build 
config 00:01:31.481 net/enic: not in enabled drivers build config 00:01:31.481 net/failsafe: not in enabled drivers build config 00:01:31.481 net/fm10k: not in enabled drivers build config 00:01:31.481 net/gve: not in enabled drivers build config 00:01:31.481 net/hinic: not in enabled drivers build config 00:01:31.481 net/hns3: not in enabled drivers build config 00:01:31.481 net/i40e: not in enabled drivers build config 00:01:31.481 net/iavf: not in enabled drivers build config 00:01:31.481 net/ice: not in enabled drivers build config 00:01:31.481 net/idpf: not in enabled drivers build config 00:01:31.481 net/igc: not in enabled drivers build config 00:01:31.481 net/ionic: not in enabled drivers build config 00:01:31.481 net/ipn3ke: not in enabled drivers build config 00:01:31.481 net/ixgbe: not in enabled drivers build config 00:01:31.481 net/mana: not in enabled drivers build config 00:01:31.481 net/memif: not in enabled drivers build config 00:01:31.481 net/mlx4: not in enabled drivers build config 00:01:31.481 net/mlx5: not in enabled drivers build config 00:01:31.481 net/mvneta: not in enabled drivers build config 00:01:31.481 net/mvpp2: not in enabled drivers build config 00:01:31.481 net/netvsc: not in enabled drivers build config 00:01:31.481 net/nfb: not in enabled drivers build config 00:01:31.481 net/nfp: not in enabled drivers build config 00:01:31.481 net/ngbe: not in enabled drivers build config 00:01:31.481 net/null: not in enabled drivers build config 00:01:31.481 net/octeontx: not in enabled drivers build config 00:01:31.481 net/octeon_ep: not in enabled drivers build config 00:01:31.481 net/pcap: not in enabled drivers build config 00:01:31.481 net/pfe: not in enabled drivers build config 00:01:31.481 net/qede: not in enabled drivers build config 00:01:31.481 net/ring: not in enabled drivers build config 00:01:31.481 net/sfc: not in enabled drivers build config 00:01:31.481 net/softnic: not in enabled drivers build config 00:01:31.481 net/tap: 
not in enabled drivers build config 00:01:31.481 net/thunderx: not in enabled drivers build config 00:01:31.481 net/txgbe: not in enabled drivers build config 00:01:31.481 net/vdev_netvsc: not in enabled drivers build config 00:01:31.481 net/vhost: not in enabled drivers build config 00:01:31.481 net/virtio: not in enabled drivers build config 00:01:31.481 net/vmxnet3: not in enabled drivers build config 00:01:31.481 raw/*: missing internal dependency, "rawdev" 00:01:31.481 crypto/armv8: not in enabled drivers build config 00:01:31.481 crypto/bcmfs: not in enabled drivers build config 00:01:31.481 crypto/caam_jr: not in enabled drivers build config 00:01:31.481 crypto/ccp: not in enabled drivers build config 00:01:31.481 crypto/cnxk: not in enabled drivers build config 00:01:31.481 crypto/dpaa_sec: not in enabled drivers build config 00:01:31.481 crypto/dpaa2_sec: not in enabled drivers build config 00:01:31.481 crypto/ipsec_mb: not in enabled drivers build config 00:01:31.481 crypto/mlx5: not in enabled drivers build config 00:01:31.481 crypto/mvsam: not in enabled drivers build config 00:01:31.481 crypto/nitrox: not in enabled drivers build config 00:01:31.481 crypto/null: not in enabled drivers build config 00:01:31.481 crypto/octeontx: not in enabled drivers build config 00:01:31.482 crypto/openssl: not in enabled drivers build config 00:01:31.482 crypto/scheduler: not in enabled drivers build config 00:01:31.482 crypto/uadk: not in enabled drivers build config 00:01:31.482 crypto/virtio: not in enabled drivers build config 00:01:31.482 compress/isal: not in enabled drivers build config 00:01:31.482 compress/mlx5: not in enabled drivers build config 00:01:31.482 compress/nitrox: not in enabled drivers build config 00:01:31.482 compress/octeontx: not in enabled drivers build config 00:01:31.482 compress/zlib: not in enabled drivers build config 00:01:31.482 regex/*: missing internal dependency, "regexdev" 00:01:31.482 ml/*: missing internal dependency, "mldev" 
00:01:31.482 vdpa/ifc: not in enabled drivers build config 00:01:31.482 vdpa/mlx5: not in enabled drivers build config 00:01:31.482 vdpa/nfp: not in enabled drivers build config 00:01:31.482 vdpa/sfc: not in enabled drivers build config 00:01:31.482 event/*: missing internal dependency, "eventdev" 00:01:31.482 baseband/*: missing internal dependency, "bbdev" 00:01:31.482 gpu/*: missing internal dependency, "gpudev" 00:01:31.482 00:01:31.482 00:01:31.482 Build targets in project: 85 00:01:31.482 00:01:31.482 DPDK 24.03.0 00:01:31.482 00:01:31.482 User defined options 00:01:31.482 buildtype : debug 00:01:31.482 default_library : shared 00:01:31.482 libdir : lib 00:01:31.482 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:31.482 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:31.482 c_link_args : 00:01:31.482 cpu_instruction_set: native 00:01:31.482 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:31.482 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:31.482 enable_docs : false 00:01:31.482 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:01:31.482 enable_kmods : false 00:01:31.482 max_lcores : 128 00:01:31.482 tests : false 00:01:31.482 00:01:31.482 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:31.755 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:31.755 [1/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:32.018 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:32.018 [3/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:32.018 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:32.018 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:32.018 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:32.018 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:32.018 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:32.018 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:32.018 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:32.018 [11/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:32.018 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:32.018 [13/268] Linking static target lib/librte_kvargs.a 00:01:32.018 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:32.018 [15/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:32.018 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:32.018 [17/268] Linking static target lib/librte_log.a 00:01:32.018 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:32.018 [19/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:32.277 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:32.277 [21/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:32.277 [22/268] Linking static target lib/librte_pci.a 00:01:32.278 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:32.278 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:32.278 [25/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:32.278 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:32.278 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:32.278 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:32.278 [29/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:32.278 [30/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:32.278 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:32.278 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:32.278 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:32.278 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:32.278 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:32.540 [36/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:32.540 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:32.540 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:32.540 [39/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:32.540 [40/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:32.540 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:32.540 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:32.540 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:32.540 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:32.540 [45/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:32.540 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:32.540 [47/268] 
Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:32.540 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:32.540 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:32.540 [50/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:32.540 [51/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:32.540 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:32.540 [53/268] Linking static target lib/librte_meter.a 00:01:32.540 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:32.540 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:32.540 [56/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:32.540 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:32.540 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:32.540 [59/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:32.540 [60/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:32.540 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:32.540 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:32.540 [63/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:32.540 [64/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:32.540 [65/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:32.540 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:32.540 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:32.540 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:32.540 [69/268] Linking static target lib/librte_ring.a 00:01:32.540 [70/268] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:32.540 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:32.540 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:32.540 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:32.540 [74/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:32.540 [75/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:32.540 [76/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:32.540 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:32.540 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:32.540 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:32.540 [80/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:32.540 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:32.540 [82/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:32.540 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:32.540 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:32.540 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:32.540 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:32.540 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:32.540 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:32.540 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:32.540 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:32.540 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:32.540 [92/268] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:32.540 [93/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:32.540 [94/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:32.540 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:32.540 [96/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:32.540 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:32.540 [98/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:32.540 [99/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:32.540 [100/268] Linking static target lib/librte_telemetry.a 00:01:32.540 [101/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:32.540 [102/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:32.540 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:32.540 [104/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:32.540 [105/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:32.540 [106/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:32.540 [107/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:32.540 [108/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:32.540 [109/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.540 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:32.540 [111/268] Linking static target lib/librte_rcu.a 00:01:32.540 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:32.540 [113/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:32.540 [114/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:32.540 [115/268] Compiling C 
object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:32.540 [116/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:32.540 [117/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:32.540 [118/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:32.540 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:32.540 [120/268] Linking static target lib/librte_mempool.a 00:01:32.800 [121/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.800 [122/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:32.800 [123/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:32.800 [124/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:32.800 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:32.800 [126/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:32.800 [127/268] Linking static target lib/librte_net.a 00:01:32.800 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:32.800 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:32.800 [130/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:32.800 [131/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:32.800 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:32.800 [133/268] Linking static target lib/librte_eal.a 00:01:32.800 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:32.800 [135/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.800 [136/268] Linking static target lib/librte_mbuf.a 00:01:32.800 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:32.800 [138/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:32.800 [139/268] Linking static target lib/librte_cmdline.a 00:01:32.800 [140/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.800 [141/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.800 [142/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:32.800 [143/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:32.801 [144/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:32.801 [145/268] Linking target lib/librte_log.so.24.1 00:01:32.801 [146/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:32.801 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:32.801 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:33.059 [149/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:33.059 [150/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:33.059 [151/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:33.059 [152/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:33.059 [153/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.059 [154/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:33.059 [155/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:33.059 [156/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:33.059 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:33.059 [158/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:33.059 [159/268] Linking static target lib/librte_dmadev.a 00:01:33.059 [160/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:33.059 [161/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.059 [162/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:33.059 [163/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:33.059 [164/268] Linking static target lib/librte_timer.a 00:01:33.059 [165/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:33.059 [166/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:33.059 [167/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:33.059 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:33.059 [169/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:33.059 [170/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:33.059 [171/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:33.059 [172/268] Linking static target lib/librte_security.a 00:01:33.060 [173/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:33.060 [174/268] Linking target lib/librte_kvargs.so.24.1 00:01:33.060 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:33.060 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:33.060 [177/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:33.060 [178/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:33.060 [179/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.060 [180/268] Linking static target lib/librte_compressdev.a 00:01:33.060 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:33.060 [182/268] Generating drivers/rte_bus_vdev.pmd.c with a 
custom command 00:01:33.060 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:33.060 [184/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:33.060 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:33.060 [186/268] Linking target lib/librte_telemetry.so.24.1 00:01:33.318 [187/268] Linking static target lib/librte_power.a 00:01:33.318 [188/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:33.318 [189/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:33.318 [190/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:33.318 [191/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:33.318 [192/268] Linking static target drivers/librte_bus_vdev.a 00:01:33.318 [193/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:33.318 [194/268] Linking static target lib/librte_reorder.a 00:01:33.318 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:33.318 [196/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:33.319 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:33.319 [198/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:33.319 [199/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:33.319 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:33.319 [201/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:33.319 [202/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:33.319 [203/268] Linking static target lib/librte_hash.a 00:01:33.319 [204/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:33.319 [205/268] Generating drivers/rte_mempool_ring.pmd.c with a 
custom command 00:01:33.319 [206/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:33.319 [207/268] Linking static target drivers/librte_bus_pci.a 00:01:33.578 [208/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:33.578 [209/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:33.578 [210/268] Linking static target drivers/librte_mempool_ring.a 00:01:33.578 [211/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.578 [212/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.578 [213/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.578 [214/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:33.578 [215/268] Linking static target lib/librte_cryptodev.a 00:01:33.578 [216/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.578 [217/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:33.578 [218/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.578 [219/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.578 [220/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.578 [221/268] Linking static target lib/librte_ethdev.a 00:01:33.837 [222/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.096 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.096 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:34.096 [225/268] Generating lib/power.sym_chk with a 
custom command (wrapped by meson to capture output) 00:01:34.096 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.355 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.292 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:35.292 [229/268] Linking static target lib/librte_vhost.a 00:01:35.551 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.926 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.198 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.765 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.024 [234/268] Linking target lib/librte_eal.so.24.1 00:01:43.024 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:43.024 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:43.024 [237/268] Linking target lib/librte_timer.so.24.1 00:01:43.024 [238/268] Linking target lib/librte_ring.so.24.1 00:01:43.024 [239/268] Linking target lib/librte_meter.so.24.1 00:01:43.024 [240/268] Linking target lib/librte_pci.so.24.1 00:01:43.024 [241/268] Linking target lib/librte_dmadev.so.24.1 00:01:43.284 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:43.284 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:43.284 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:43.284 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:43.284 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:43.284 [247/268] Linking target lib/librte_mempool.so.24.1 
00:01:43.284 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:43.284 [249/268] Linking target lib/librte_rcu.so.24.1 00:01:43.284 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:43.284 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:43.543 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:43.543 [253/268] Linking target lib/librte_mbuf.so.24.1 00:01:43.543 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:43.543 [255/268] Linking target lib/librte_net.so.24.1 00:01:43.543 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:01:43.543 [257/268] Linking target lib/librte_compressdev.so.24.1 00:01:43.543 [258/268] Linking target lib/librte_reorder.so.24.1 00:01:43.803 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:43.803 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:43.803 [261/268] Linking target lib/librte_hash.so.24.1 00:01:43.803 [262/268] Linking target lib/librte_cmdline.so.24.1 00:01:43.803 [263/268] Linking target lib/librte_security.so.24.1 00:01:43.803 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:44.062 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:44.062 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:44.062 [267/268] Linking target lib/librte_power.so.24.1 00:01:44.062 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:44.062 INFO: autodetecting backend as ninja 00:01:44.062 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:01:54.044 CC lib/ut/ut.o 00:01:54.044 CC lib/ut_mock/mock.o 00:01:54.044 CC lib/log/log.o 00:01:54.044 CC lib/log/log_flags.o 00:01:54.044 CC lib/log/log_deprecated.o 
00:01:54.304 LIB libspdk_ut_mock.a 00:01:54.304 LIB libspdk_ut.a 00:01:54.304 LIB libspdk_log.a 00:01:54.304 SO libspdk_ut_mock.so.6.0 00:01:54.304 SO libspdk_ut.so.2.0 00:01:54.304 SO libspdk_log.so.7.1 00:01:54.304 SYMLINK libspdk_ut_mock.so 00:01:54.304 SYMLINK libspdk_ut.so 00:01:54.304 SYMLINK libspdk_log.so 00:01:54.871 CC lib/util/base64.o 00:01:54.871 CC lib/util/bit_array.o 00:01:54.871 CC lib/ioat/ioat.o 00:01:54.871 CC lib/dma/dma.o 00:01:54.871 CC lib/util/cpuset.o 00:01:54.871 CC lib/util/crc16.o 00:01:54.871 CC lib/util/crc32.o 00:01:54.871 CC lib/util/crc32c.o 00:01:54.871 CC lib/util/crc32_ieee.o 00:01:54.871 CC lib/util/crc64.o 00:01:54.871 CC lib/util/dif.o 00:01:54.871 CXX lib/trace_parser/trace.o 00:01:54.871 CC lib/util/fd.o 00:01:54.871 CC lib/util/fd_group.o 00:01:54.871 CC lib/util/file.o 00:01:54.871 CC lib/util/hexlify.o 00:01:54.871 CC lib/util/iov.o 00:01:54.871 CC lib/util/math.o 00:01:54.871 CC lib/util/net.o 00:01:54.871 CC lib/util/pipe.o 00:01:54.871 CC lib/util/strerror_tls.o 00:01:54.871 CC lib/util/string.o 00:01:54.871 CC lib/util/uuid.o 00:01:54.871 CC lib/util/xor.o 00:01:54.871 CC lib/util/zipf.o 00:01:54.871 CC lib/util/md5.o 00:01:54.871 CC lib/vfio_user/host/vfio_user.o 00:01:54.871 CC lib/vfio_user/host/vfio_user_pci.o 00:01:54.871 LIB libspdk_dma.a 00:01:55.130 SO libspdk_dma.so.5.0 00:01:55.130 LIB libspdk_ioat.a 00:01:55.130 SYMLINK libspdk_dma.so 00:01:55.130 SO libspdk_ioat.so.7.0 00:01:55.130 LIB libspdk_vfio_user.a 00:01:55.130 SYMLINK libspdk_ioat.so 00:01:55.130 SO libspdk_vfio_user.so.5.0 00:01:55.130 SYMLINK libspdk_vfio_user.so 00:01:55.389 LIB libspdk_util.a 00:01:55.389 SO libspdk_util.so.10.1 00:01:55.389 SYMLINK libspdk_util.so 00:01:55.389 LIB libspdk_trace_parser.a 00:01:55.389 SO libspdk_trace_parser.so.6.0 00:01:55.648 SYMLINK libspdk_trace_parser.so 00:01:55.648 CC lib/json/json_parse.o 00:01:55.648 CC lib/json/json_util.o 00:01:55.648 CC lib/json/json_write.o 00:01:55.648 CC lib/env_dpdk/env.o 
00:01:55.648 CC lib/idxd/idxd.o 00:01:55.648 CC lib/env_dpdk/memory.o 00:01:55.648 CC lib/idxd/idxd_user.o 00:01:55.648 CC lib/env_dpdk/pci.o 00:01:55.648 CC lib/idxd/idxd_kernel.o 00:01:55.648 CC lib/env_dpdk/init.o 00:01:55.648 CC lib/env_dpdk/threads.o 00:01:55.648 CC lib/env_dpdk/pci_ioat.o 00:01:55.648 CC lib/conf/conf.o 00:01:55.648 CC lib/vmd/vmd.o 00:01:55.648 CC lib/env_dpdk/pci_virtio.o 00:01:55.648 CC lib/rdma_utils/rdma_utils.o 00:01:55.648 CC lib/vmd/led.o 00:01:55.648 CC lib/env_dpdk/pci_vmd.o 00:01:55.648 CC lib/env_dpdk/pci_idxd.o 00:01:55.648 CC lib/env_dpdk/pci_event.o 00:01:55.648 CC lib/env_dpdk/sigbus_handler.o 00:01:55.648 CC lib/env_dpdk/pci_dpdk.o 00:01:55.648 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:55.648 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:55.907 LIB libspdk_conf.a 00:01:56.166 LIB libspdk_json.a 00:01:56.166 SO libspdk_conf.so.6.0 00:01:56.166 LIB libspdk_rdma_utils.a 00:01:56.166 SO libspdk_rdma_utils.so.1.0 00:01:56.166 SO libspdk_json.so.6.0 00:01:56.166 SYMLINK libspdk_conf.so 00:01:56.166 SYMLINK libspdk_rdma_utils.so 00:01:56.166 SYMLINK libspdk_json.so 00:01:56.166 LIB libspdk_idxd.a 00:01:56.425 SO libspdk_idxd.so.12.1 00:01:56.425 LIB libspdk_vmd.a 00:01:56.425 SO libspdk_vmd.so.6.0 00:01:56.425 SYMLINK libspdk_idxd.so 00:01:56.425 SYMLINK libspdk_vmd.so 00:01:56.425 CC lib/rdma_provider/common.o 00:01:56.425 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:56.425 CC lib/jsonrpc/jsonrpc_server.o 00:01:56.425 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:56.425 CC lib/jsonrpc/jsonrpc_client.o 00:01:56.425 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:56.683 LIB libspdk_rdma_provider.a 00:01:56.683 SO libspdk_rdma_provider.so.7.0 00:01:56.683 LIB libspdk_jsonrpc.a 00:01:56.683 SO libspdk_jsonrpc.so.6.0 00:01:56.683 SYMLINK libspdk_rdma_provider.so 00:01:56.683 SYMLINK libspdk_jsonrpc.so 00:01:56.942 LIB libspdk_env_dpdk.a 00:01:56.942 SO libspdk_env_dpdk.so.15.1 00:01:56.942 SYMLINK libspdk_env_dpdk.so 00:01:56.942 CC lib/rpc/rpc.o 
00:01:57.200 LIB libspdk_rpc.a 00:01:57.200 SO libspdk_rpc.so.6.0 00:01:57.549 SYMLINK libspdk_rpc.so 00:01:57.866 CC lib/notify/notify.o 00:01:57.866 CC lib/trace/trace.o 00:01:57.866 CC lib/notify/notify_rpc.o 00:01:57.866 CC lib/trace/trace_flags.o 00:01:57.866 CC lib/trace/trace_rpc.o 00:01:57.866 CC lib/keyring/keyring.o 00:01:57.866 CC lib/keyring/keyring_rpc.o 00:01:57.866 LIB libspdk_notify.a 00:01:57.866 SO libspdk_notify.so.6.0 00:01:57.866 LIB libspdk_keyring.a 00:01:57.866 LIB libspdk_trace.a 00:01:57.866 SYMLINK libspdk_notify.so 00:01:57.866 SO libspdk_keyring.so.2.0 00:01:57.866 SO libspdk_trace.so.11.0 00:01:58.125 SYMLINK libspdk_keyring.so 00:01:58.125 SYMLINK libspdk_trace.so 00:01:58.383 CC lib/thread/thread.o 00:01:58.384 CC lib/thread/iobuf.o 00:01:58.384 CC lib/sock/sock.o 00:01:58.384 CC lib/sock/sock_rpc.o 00:01:58.642 LIB libspdk_sock.a 00:01:58.642 SO libspdk_sock.so.10.0 00:01:58.642 SYMLINK libspdk_sock.so 00:01:59.211 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:59.211 CC lib/nvme/nvme_ctrlr.o 00:01:59.211 CC lib/nvme/nvme_fabric.o 00:01:59.211 CC lib/nvme/nvme_ns_cmd.o 00:01:59.211 CC lib/nvme/nvme_ns.o 00:01:59.211 CC lib/nvme/nvme_pcie_common.o 00:01:59.211 CC lib/nvme/nvme_pcie.o 00:01:59.211 CC lib/nvme/nvme_qpair.o 00:01:59.211 CC lib/nvme/nvme.o 00:01:59.211 CC lib/nvme/nvme_quirks.o 00:01:59.211 CC lib/nvme/nvme_transport.o 00:01:59.211 CC lib/nvme/nvme_discovery.o 00:01:59.211 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:59.211 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:59.211 CC lib/nvme/nvme_tcp.o 00:01:59.211 CC lib/nvme/nvme_opal.o 00:01:59.211 CC lib/nvme/nvme_poll_group.o 00:01:59.211 CC lib/nvme/nvme_io_msg.o 00:01:59.211 CC lib/nvme/nvme_zns.o 00:01:59.211 CC lib/nvme/nvme_stubs.o 00:01:59.211 CC lib/nvme/nvme_auth.o 00:01:59.211 CC lib/nvme/nvme_cuse.o 00:01:59.211 CC lib/nvme/nvme_vfio_user.o 00:01:59.211 CC lib/nvme/nvme_rdma.o 00:01:59.482 LIB libspdk_thread.a 00:01:59.482 SO libspdk_thread.so.11.0 00:01:59.482 SYMLINK 
libspdk_thread.so 00:01:59.739 CC lib/virtio/virtio.o 00:01:59.739 CC lib/virtio/virtio_vhost_user.o 00:01:59.739 CC lib/virtio/virtio_vfio_user.o 00:01:59.739 CC lib/virtio/virtio_pci.o 00:01:59.739 CC lib/fsdev/fsdev.o 00:01:59.739 CC lib/blob/blobstore.o 00:01:59.739 CC lib/fsdev/fsdev_io.o 00:01:59.739 CC lib/blob/request.o 00:01:59.739 CC lib/fsdev/fsdev_rpc.o 00:01:59.739 CC lib/blob/zeroes.o 00:01:59.739 CC lib/blob/blob_bs_dev.o 00:01:59.739 CC lib/vfu_tgt/tgt_endpoint.o 00:01:59.739 CC lib/vfu_tgt/tgt_rpc.o 00:01:59.739 CC lib/accel/accel.o 00:01:59.739 CC lib/init/subsystem_rpc.o 00:01:59.739 CC lib/init/json_config.o 00:01:59.739 CC lib/init/subsystem.o 00:01:59.739 CC lib/accel/accel_rpc.o 00:01:59.739 CC lib/accel/accel_sw.o 00:01:59.739 CC lib/init/rpc.o 00:01:59.996 LIB libspdk_init.a 00:01:59.996 LIB libspdk_virtio.a 00:01:59.996 SO libspdk_init.so.6.0 00:01:59.996 LIB libspdk_vfu_tgt.a 00:01:59.996 SO libspdk_virtio.so.7.0 00:01:59.996 SO libspdk_vfu_tgt.so.3.0 00:02:00.253 SYMLINK libspdk_init.so 00:02:00.253 SYMLINK libspdk_virtio.so 00:02:00.253 SYMLINK libspdk_vfu_tgt.so 00:02:00.253 LIB libspdk_fsdev.a 00:02:00.253 SO libspdk_fsdev.so.2.0 00:02:00.510 SYMLINK libspdk_fsdev.so 00:02:00.510 CC lib/event/app.o 00:02:00.510 CC lib/event/reactor.o 00:02:00.510 CC lib/event/log_rpc.o 00:02:00.510 CC lib/event/app_rpc.o 00:02:00.510 CC lib/event/scheduler_static.o 00:02:00.510 LIB libspdk_accel.a 00:02:00.801 SO libspdk_accel.so.16.0 00:02:00.801 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:00.801 SYMLINK libspdk_accel.so 00:02:00.801 LIB libspdk_nvme.a 00:02:00.801 LIB libspdk_event.a 00:02:00.801 SO libspdk_event.so.14.0 00:02:00.801 SO libspdk_nvme.so.15.0 00:02:00.801 SYMLINK libspdk_event.so 00:02:01.058 CC lib/bdev/bdev.o 00:02:01.058 CC lib/bdev/bdev_rpc.o 00:02:01.058 CC lib/bdev/bdev_zone.o 00:02:01.058 CC lib/bdev/part.o 00:02:01.058 CC lib/bdev/scsi_nvme.o 00:02:01.058 SYMLINK libspdk_nvme.so 00:02:01.058 LIB libspdk_fuse_dispatcher.a 
00:02:01.316 SO libspdk_fuse_dispatcher.so.1.0 00:02:01.316 SYMLINK libspdk_fuse_dispatcher.so 00:02:01.882 LIB libspdk_blob.a 00:02:01.882 SO libspdk_blob.so.11.0 00:02:02.139 SYMLINK libspdk_blob.so 00:02:02.397 CC lib/blobfs/blobfs.o 00:02:02.397 CC lib/blobfs/tree.o 00:02:02.397 CC lib/lvol/lvol.o 00:02:02.964 LIB libspdk_bdev.a 00:02:02.964 LIB libspdk_blobfs.a 00:02:02.964 SO libspdk_bdev.so.17.0 00:02:02.964 SO libspdk_blobfs.so.10.0 00:02:02.964 SYMLINK libspdk_blobfs.so 00:02:02.964 LIB libspdk_lvol.a 00:02:02.964 SYMLINK libspdk_bdev.so 00:02:02.964 SO libspdk_lvol.so.10.0 00:02:03.223 SYMLINK libspdk_lvol.so 00:02:03.223 CC lib/scsi/lun.o 00:02:03.223 CC lib/scsi/dev.o 00:02:03.223 CC lib/scsi/port.o 00:02:03.223 CC lib/scsi/scsi_bdev.o 00:02:03.224 CC lib/scsi/scsi.o 00:02:03.224 CC lib/scsi/scsi_pr.o 00:02:03.224 CC lib/scsi/scsi_rpc.o 00:02:03.224 CC lib/scsi/task.o 00:02:03.224 CC lib/ftl/ftl_core.o 00:02:03.224 CC lib/nbd/nbd.o 00:02:03.224 CC lib/ftl/ftl_init.o 00:02:03.224 CC lib/ftl/ftl_layout.o 00:02:03.224 CC lib/ftl/ftl_debug.o 00:02:03.224 CC lib/nvmf/ctrlr.o 00:02:03.224 CC lib/nbd/nbd_rpc.o 00:02:03.224 CC lib/nvmf/ctrlr_discovery.o 00:02:03.224 CC lib/ublk/ublk.o 00:02:03.224 CC lib/ftl/ftl_io.o 00:02:03.224 CC lib/nvmf/ctrlr_bdev.o 00:02:03.224 CC lib/ublk/ublk_rpc.o 00:02:03.224 CC lib/ftl/ftl_sb.o 00:02:03.224 CC lib/ftl/ftl_l2p.o 00:02:03.224 CC lib/nvmf/subsystem.o 00:02:03.224 CC lib/ftl/ftl_l2p_flat.o 00:02:03.224 CC lib/nvmf/nvmf.o 00:02:03.224 CC lib/ftl/ftl_nv_cache.o 00:02:03.224 CC lib/ftl/ftl_band.o 00:02:03.224 CC lib/nvmf/nvmf_rpc.o 00:02:03.224 CC lib/nvmf/transport.o 00:02:03.224 CC lib/ftl/ftl_band_ops.o 00:02:03.224 CC lib/ftl/ftl_writer.o 00:02:03.224 CC lib/ftl/ftl_rq.o 00:02:03.224 CC lib/nvmf/stubs.o 00:02:03.224 CC lib/nvmf/tcp.o 00:02:03.484 CC lib/ftl/ftl_reloc.o 00:02:03.484 CC lib/nvmf/mdns_server.o 00:02:03.484 CC lib/nvmf/vfio_user.o 00:02:03.484 CC lib/ftl/ftl_l2p_cache.o 00:02:03.484 CC lib/ftl/ftl_p2l.o 
00:02:03.484 CC lib/ftl/ftl_p2l_log.o 00:02:03.484 CC lib/ftl/mngt/ftl_mngt.o 00:02:03.484 CC lib/nvmf/rdma.o 00:02:03.484 CC lib/nvmf/auth.o 00:02:03.484 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:03.484 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:03.484 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:03.484 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:03.484 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:03.484 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:03.484 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:03.484 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:03.484 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:03.484 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:03.484 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:03.484 CC lib/ftl/utils/ftl_conf.o 00:02:03.484 CC lib/ftl/utils/ftl_md.o 00:02:03.484 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:03.484 CC lib/ftl/utils/ftl_mempool.o 00:02:03.484 CC lib/ftl/utils/ftl_bitmap.o 00:02:03.484 CC lib/ftl/utils/ftl_property.o 00:02:03.484 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:03.484 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:03.484 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:03.484 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:03.484 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:03.484 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:03.484 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:03.484 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:03.484 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:03.484 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:03.484 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:03.484 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:03.484 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:03.484 CC lib/ftl/base/ftl_base_dev.o 00:02:03.484 CC lib/ftl/ftl_trace.o 00:02:03.484 CC lib/ftl/base/ftl_base_bdev.o 00:02:04.049 LIB libspdk_nbd.a 00:02:04.049 SO libspdk_nbd.so.7.0 00:02:04.049 LIB libspdk_scsi.a 00:02:04.049 SYMLINK libspdk_nbd.so 00:02:04.049 SO libspdk_scsi.so.9.0 00:02:04.049 LIB libspdk_ublk.a 00:02:04.049 SO libspdk_ublk.so.3.0 00:02:04.049 SYMLINK libspdk_scsi.so 00:02:04.049 SYMLINK libspdk_ublk.so 00:02:04.308 
CC lib/iscsi/conn.o 00:02:04.308 CC lib/iscsi/init_grp.o 00:02:04.308 CC lib/iscsi/iscsi.o 00:02:04.308 CC lib/iscsi/param.o 00:02:04.308 CC lib/iscsi/portal_grp.o 00:02:04.308 CC lib/iscsi/tgt_node.o 00:02:04.309 CC lib/iscsi/iscsi_subsystem.o 00:02:04.309 CC lib/iscsi/iscsi_rpc.o 00:02:04.309 CC lib/iscsi/task.o 00:02:04.309 CC lib/vhost/vhost.o 00:02:04.309 CC lib/vhost/vhost_rpc.o 00:02:04.309 CC lib/vhost/vhost_scsi.o 00:02:04.309 CC lib/vhost/vhost_blk.o 00:02:04.309 CC lib/vhost/rte_vhost_user.o 00:02:04.568 LIB libspdk_ftl.a 00:02:04.568 SO libspdk_ftl.so.9.0 00:02:04.826 SYMLINK libspdk_ftl.so 00:02:05.084 LIB libspdk_nvmf.a 00:02:05.084 SO libspdk_nvmf.so.20.0 00:02:05.084 LIB libspdk_vhost.a 00:02:05.343 SO libspdk_vhost.so.8.0 00:02:05.343 SYMLINK libspdk_vhost.so 00:02:05.343 SYMLINK libspdk_nvmf.so 00:02:05.343 LIB libspdk_iscsi.a 00:02:05.343 SO libspdk_iscsi.so.8.0 00:02:05.603 SYMLINK libspdk_iscsi.so 00:02:06.170 CC module/env_dpdk/env_dpdk_rpc.o 00:02:06.170 CC module/vfu_device/vfu_virtio.o 00:02:06.170 CC module/vfu_device/vfu_virtio_scsi.o 00:02:06.170 CC module/vfu_device/vfu_virtio_blk.o 00:02:06.170 CC module/vfu_device/vfu_virtio_rpc.o 00:02:06.170 CC module/vfu_device/vfu_virtio_fs.o 00:02:06.170 LIB libspdk_env_dpdk_rpc.a 00:02:06.170 CC module/accel/ioat/accel_ioat_rpc.o 00:02:06.170 CC module/accel/ioat/accel_ioat.o 00:02:06.170 CC module/accel/iaa/accel_iaa.o 00:02:06.170 CC module/accel/iaa/accel_iaa_rpc.o 00:02:06.170 CC module/accel/error/accel_error.o 00:02:06.170 CC module/keyring/file/keyring.o 00:02:06.170 CC module/accel/error/accel_error_rpc.o 00:02:06.170 CC module/keyring/file/keyring_rpc.o 00:02:06.170 CC module/accel/dsa/accel_dsa.o 00:02:06.170 CC module/accel/dsa/accel_dsa_rpc.o 00:02:06.170 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:06.170 CC module/scheduler/gscheduler/gscheduler.o 00:02:06.170 CC module/sock/posix/posix.o 00:02:06.170 CC module/keyring/linux/keyring.o 00:02:06.170 CC 
module/fsdev/aio/fsdev_aio.o 00:02:06.170 CC module/keyring/linux/keyring_rpc.o 00:02:06.170 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:06.170 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:06.170 CC module/blob/bdev/blob_bdev.o 00:02:06.170 CC module/fsdev/aio/linux_aio_mgr.o 00:02:06.170 SO libspdk_env_dpdk_rpc.so.6.0 00:02:06.429 SYMLINK libspdk_env_dpdk_rpc.so 00:02:06.429 LIB libspdk_keyring_file.a 00:02:06.429 LIB libspdk_keyring_linux.a 00:02:06.429 LIB libspdk_scheduler_gscheduler.a 00:02:06.429 LIB libspdk_scheduler_dpdk_governor.a 00:02:06.429 SO libspdk_keyring_linux.so.1.0 00:02:06.429 SO libspdk_keyring_file.so.2.0 00:02:06.429 LIB libspdk_accel_ioat.a 00:02:06.429 SO libspdk_scheduler_gscheduler.so.4.0 00:02:06.429 LIB libspdk_accel_error.a 00:02:06.429 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:06.429 SO libspdk_accel_ioat.so.6.0 00:02:06.429 LIB libspdk_scheduler_dynamic.a 00:02:06.429 LIB libspdk_accel_iaa.a 00:02:06.429 SO libspdk_accel_error.so.2.0 00:02:06.429 SYMLINK libspdk_keyring_file.so 00:02:06.429 SYMLINK libspdk_keyring_linux.so 00:02:06.429 SYMLINK libspdk_scheduler_gscheduler.so 00:02:06.429 SO libspdk_scheduler_dynamic.so.4.0 00:02:06.429 SO libspdk_accel_iaa.so.3.0 00:02:06.429 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:06.429 LIB libspdk_blob_bdev.a 00:02:06.429 SYMLINK libspdk_accel_ioat.so 00:02:06.429 LIB libspdk_accel_dsa.a 00:02:06.429 SYMLINK libspdk_accel_error.so 00:02:06.429 SO libspdk_blob_bdev.so.11.0 00:02:06.687 SYMLINK libspdk_scheduler_dynamic.so 00:02:06.687 SO libspdk_accel_dsa.so.5.0 00:02:06.687 SYMLINK libspdk_accel_iaa.so 00:02:06.687 SYMLINK libspdk_blob_bdev.so 00:02:06.687 SYMLINK libspdk_accel_dsa.so 00:02:06.687 LIB libspdk_vfu_device.a 00:02:06.687 SO libspdk_vfu_device.so.3.0 00:02:06.687 SYMLINK libspdk_vfu_device.so 00:02:06.946 LIB libspdk_fsdev_aio.a 00:02:06.946 LIB libspdk_sock_posix.a 00:02:06.946 SO libspdk_fsdev_aio.so.1.0 00:02:06.946 SO libspdk_sock_posix.so.6.0 00:02:06.946 
SYMLINK libspdk_fsdev_aio.so 00:02:06.946 SYMLINK libspdk_sock_posix.so 00:02:06.946 CC module/bdev/error/vbdev_error.o 00:02:06.946 CC module/bdev/error/vbdev_error_rpc.o 00:02:06.946 CC module/bdev/nvme/bdev_nvme.o 00:02:06.946 CC module/bdev/delay/vbdev_delay.o 00:02:06.946 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:06.946 CC module/bdev/nvme/bdev_mdns_client.o 00:02:06.946 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:06.946 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:06.946 CC module/bdev/passthru/vbdev_passthru.o 00:02:06.946 CC module/bdev/nvme/nvme_rpc.o 00:02:06.946 CC module/bdev/nvme/vbdev_opal.o 00:02:07.204 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:07.204 CC module/bdev/gpt/vbdev_gpt.o 00:02:07.204 CC module/bdev/gpt/gpt.o 00:02:07.204 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:07.204 CC module/bdev/split/vbdev_split.o 00:02:07.204 CC module/bdev/split/vbdev_split_rpc.o 00:02:07.204 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:07.204 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:07.204 CC module/bdev/aio/bdev_aio_rpc.o 00:02:07.204 CC module/bdev/aio/bdev_aio.o 00:02:07.204 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:07.204 CC module/blobfs/bdev/blobfs_bdev.o 00:02:07.204 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:07.204 CC module/bdev/ftl/bdev_ftl.o 00:02:07.204 CC module/bdev/malloc/bdev_malloc.o 00:02:07.204 CC module/bdev/lvol/vbdev_lvol.o 00:02:07.204 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:07.204 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:07.204 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:07.204 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:07.204 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:07.204 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:07.204 CC module/bdev/null/bdev_null.o 00:02:07.204 CC module/bdev/iscsi/bdev_iscsi.o 00:02:07.204 CC module/bdev/null/bdev_null_rpc.o 00:02:07.204 CC module/bdev/raid/bdev_raid.o 00:02:07.204 CC module/bdev/raid/bdev_raid_rpc.o 00:02:07.204 CC module/bdev/raid/bdev_raid_sb.o 
00:02:07.204 CC module/bdev/raid/raid1.o 00:02:07.204 CC module/bdev/raid/raid0.o 00:02:07.204 CC module/bdev/raid/concat.o 00:02:07.462 LIB libspdk_blobfs_bdev.a 00:02:07.462 LIB libspdk_bdev_error.a 00:02:07.462 SO libspdk_blobfs_bdev.so.6.0 00:02:07.462 LIB libspdk_bdev_split.a 00:02:07.462 SO libspdk_bdev_error.so.6.0 00:02:07.462 LIB libspdk_bdev_null.a 00:02:07.462 SO libspdk_bdev_null.so.6.0 00:02:07.462 SO libspdk_bdev_split.so.6.0 00:02:07.462 LIB libspdk_bdev_ftl.a 00:02:07.462 LIB libspdk_bdev_gpt.a 00:02:07.462 LIB libspdk_bdev_zone_block.a 00:02:07.462 SYMLINK libspdk_blobfs_bdev.so 00:02:07.462 LIB libspdk_bdev_passthru.a 00:02:07.462 LIB libspdk_bdev_aio.a 00:02:07.462 SYMLINK libspdk_bdev_error.so 00:02:07.462 SO libspdk_bdev_ftl.so.6.0 00:02:07.462 SO libspdk_bdev_passthru.so.6.0 00:02:07.462 SO libspdk_bdev_gpt.so.6.0 00:02:07.462 SO libspdk_bdev_zone_block.so.6.0 00:02:07.462 LIB libspdk_bdev_delay.a 00:02:07.462 SYMLINK libspdk_bdev_split.so 00:02:07.462 SO libspdk_bdev_aio.so.6.0 00:02:07.462 SYMLINK libspdk_bdev_null.so 00:02:07.462 LIB libspdk_bdev_malloc.a 00:02:07.462 LIB libspdk_bdev_iscsi.a 00:02:07.462 SO libspdk_bdev_delay.so.6.0 00:02:07.462 SYMLINK libspdk_bdev_passthru.so 00:02:07.462 SYMLINK libspdk_bdev_ftl.so 00:02:07.462 SYMLINK libspdk_bdev_gpt.so 00:02:07.462 SYMLINK libspdk_bdev_zone_block.so 00:02:07.462 SO libspdk_bdev_malloc.so.6.0 00:02:07.462 SYMLINK libspdk_bdev_aio.so 00:02:07.462 SO libspdk_bdev_iscsi.so.6.0 00:02:07.720 SYMLINK libspdk_bdev_delay.so 00:02:07.720 SYMLINK libspdk_bdev_malloc.so 00:02:07.720 LIB libspdk_bdev_lvol.a 00:02:07.720 LIB libspdk_bdev_virtio.a 00:02:07.720 SYMLINK libspdk_bdev_iscsi.so 00:02:07.720 SO libspdk_bdev_lvol.so.6.0 00:02:07.720 SO libspdk_bdev_virtio.so.6.0 00:02:07.720 SYMLINK libspdk_bdev_lvol.so 00:02:07.720 SYMLINK libspdk_bdev_virtio.so 00:02:07.979 LIB libspdk_bdev_raid.a 00:02:07.979 SO libspdk_bdev_raid.so.6.0 00:02:07.979 SYMLINK libspdk_bdev_raid.so 00:02:08.916 LIB 
libspdk_bdev_nvme.a 00:02:08.916 SO libspdk_bdev_nvme.so.7.1 00:02:09.176 SYMLINK libspdk_bdev_nvme.so 00:02:09.743 CC module/event/subsystems/vmd/vmd.o 00:02:09.743 CC module/event/subsystems/iobuf/iobuf.o 00:02:09.743 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:09.743 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:09.743 CC module/event/subsystems/scheduler/scheduler.o 00:02:09.743 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:09.743 CC module/event/subsystems/sock/sock.o 00:02:09.743 CC module/event/subsystems/keyring/keyring.o 00:02:09.743 CC module/event/subsystems/fsdev/fsdev.o 00:02:09.743 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:10.003 LIB libspdk_event_keyring.a 00:02:10.003 LIB libspdk_event_vmd.a 00:02:10.003 LIB libspdk_event_iobuf.a 00:02:10.003 LIB libspdk_event_vfu_tgt.a 00:02:10.003 LIB libspdk_event_scheduler.a 00:02:10.003 LIB libspdk_event_vhost_blk.a 00:02:10.003 LIB libspdk_event_sock.a 00:02:10.003 LIB libspdk_event_fsdev.a 00:02:10.003 SO libspdk_event_keyring.so.1.0 00:02:10.003 SO libspdk_event_vmd.so.6.0 00:02:10.003 SO libspdk_event_vfu_tgt.so.3.0 00:02:10.003 SO libspdk_event_scheduler.so.4.0 00:02:10.003 SO libspdk_event_iobuf.so.3.0 00:02:10.003 SO libspdk_event_vhost_blk.so.3.0 00:02:10.003 SO libspdk_event_fsdev.so.1.0 00:02:10.003 SO libspdk_event_sock.so.5.0 00:02:10.003 SYMLINK libspdk_event_keyring.so 00:02:10.003 SYMLINK libspdk_event_vfu_tgt.so 00:02:10.003 SYMLINK libspdk_event_vmd.so 00:02:10.003 SYMLINK libspdk_event_scheduler.so 00:02:10.003 SYMLINK libspdk_event_fsdev.so 00:02:10.003 SYMLINK libspdk_event_vhost_blk.so 00:02:10.003 SYMLINK libspdk_event_iobuf.so 00:02:10.003 SYMLINK libspdk_event_sock.so 00:02:10.261 CC module/event/subsystems/accel/accel.o 00:02:10.520 LIB libspdk_event_accel.a 00:02:10.520 SO libspdk_event_accel.so.6.0 00:02:10.520 SYMLINK libspdk_event_accel.so 00:02:10.779 CC module/event/subsystems/bdev/bdev.o 00:02:11.038 LIB libspdk_event_bdev.a 00:02:11.038 SO 
libspdk_event_bdev.so.6.0 00:02:11.038 SYMLINK libspdk_event_bdev.so 00:02:11.603 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:11.603 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:11.603 CC module/event/subsystems/ublk/ublk.o 00:02:11.603 CC module/event/subsystems/scsi/scsi.o 00:02:11.603 CC module/event/subsystems/nbd/nbd.o 00:02:11.603 LIB libspdk_event_ublk.a 00:02:11.603 LIB libspdk_event_nbd.a 00:02:11.603 LIB libspdk_event_scsi.a 00:02:11.603 SO libspdk_event_ublk.so.3.0 00:02:11.603 SO libspdk_event_nbd.so.6.0 00:02:11.603 SO libspdk_event_scsi.so.6.0 00:02:11.603 LIB libspdk_event_nvmf.a 00:02:11.603 SO libspdk_event_nvmf.so.6.0 00:02:11.603 SYMLINK libspdk_event_ublk.so 00:02:11.603 SYMLINK libspdk_event_nbd.so 00:02:11.603 SYMLINK libspdk_event_scsi.so 00:02:11.862 SYMLINK libspdk_event_nvmf.so 00:02:12.121 CC module/event/subsystems/iscsi/iscsi.o 00:02:12.122 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:12.122 LIB libspdk_event_vhost_scsi.a 00:02:12.122 LIB libspdk_event_iscsi.a 00:02:12.122 SO libspdk_event_vhost_scsi.so.3.0 00:02:12.122 SO libspdk_event_iscsi.so.6.0 00:02:12.122 SYMLINK libspdk_event_vhost_scsi.so 00:02:12.381 SYMLINK libspdk_event_iscsi.so 00:02:12.381 SO libspdk.so.6.0 00:02:12.381 SYMLINK libspdk.so 00:02:12.639 CC app/trace_record/trace_record.o 00:02:12.906 CC app/spdk_nvme_perf/perf.o 00:02:12.906 CXX app/trace/trace.o 00:02:12.906 CC app/spdk_top/spdk_top.o 00:02:12.906 TEST_HEADER include/spdk/accel.h 00:02:12.906 CC app/spdk_nvme_identify/identify.o 00:02:12.906 TEST_HEADER include/spdk/barrier.h 00:02:12.906 TEST_HEADER include/spdk/accel_module.h 00:02:12.906 TEST_HEADER include/spdk/assert.h 00:02:12.906 CC test/rpc_client/rpc_client_test.o 00:02:12.906 TEST_HEADER include/spdk/base64.h 00:02:12.906 CC app/spdk_lspci/spdk_lspci.o 00:02:12.906 TEST_HEADER include/spdk/bdev_module.h 00:02:12.906 TEST_HEADER include/spdk/bdev.h 00:02:12.906 TEST_HEADER include/spdk/bit_array.h 00:02:12.906 TEST_HEADER 
include/spdk/bit_pool.h 00:02:12.906 TEST_HEADER include/spdk/bdev_zone.h 00:02:12.906 TEST_HEADER include/spdk/blob_bdev.h 00:02:12.906 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:12.907 TEST_HEADER include/spdk/blobfs.h 00:02:12.907 TEST_HEADER include/spdk/blob.h 00:02:12.907 TEST_HEADER include/spdk/conf.h 00:02:12.907 TEST_HEADER include/spdk/config.h 00:02:12.907 TEST_HEADER include/spdk/cpuset.h 00:02:12.907 TEST_HEADER include/spdk/crc16.h 00:02:12.907 TEST_HEADER include/spdk/crc64.h 00:02:12.907 TEST_HEADER include/spdk/crc32.h 00:02:12.907 TEST_HEADER include/spdk/dif.h 00:02:12.907 TEST_HEADER include/spdk/endian.h 00:02:12.907 TEST_HEADER include/spdk/env_dpdk.h 00:02:12.907 TEST_HEADER include/spdk/env.h 00:02:12.907 TEST_HEADER include/spdk/dma.h 00:02:12.907 TEST_HEADER include/spdk/fd_group.h 00:02:12.907 TEST_HEADER include/spdk/fd.h 00:02:12.907 TEST_HEADER include/spdk/event.h 00:02:12.907 CC app/spdk_nvme_discover/discovery_aer.o 00:02:12.907 TEST_HEADER include/spdk/fsdev.h 00:02:12.907 TEST_HEADER include/spdk/file.h 00:02:12.907 TEST_HEADER include/spdk/fsdev_module.h 00:02:12.907 TEST_HEADER include/spdk/ftl.h 00:02:12.907 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:12.907 TEST_HEADER include/spdk/gpt_spec.h 00:02:12.907 TEST_HEADER include/spdk/hexlify.h 00:02:12.907 TEST_HEADER include/spdk/histogram_data.h 00:02:12.907 TEST_HEADER include/spdk/idxd.h 00:02:12.907 TEST_HEADER include/spdk/idxd_spec.h 00:02:12.907 TEST_HEADER include/spdk/init.h 00:02:12.907 TEST_HEADER include/spdk/ioat.h 00:02:12.907 TEST_HEADER include/spdk/ioat_spec.h 00:02:12.907 TEST_HEADER include/spdk/iscsi_spec.h 00:02:12.907 TEST_HEADER include/spdk/json.h 00:02:12.907 TEST_HEADER include/spdk/keyring.h 00:02:12.907 TEST_HEADER include/spdk/jsonrpc.h 00:02:12.907 TEST_HEADER include/spdk/keyring_module.h 00:02:12.907 TEST_HEADER include/spdk/likely.h 00:02:12.907 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:12.907 TEST_HEADER include/spdk/log.h 
00:02:12.907 TEST_HEADER include/spdk/md5.h 00:02:12.907 TEST_HEADER include/spdk/memory.h 00:02:12.907 TEST_HEADER include/spdk/lvol.h 00:02:12.907 TEST_HEADER include/spdk/mmio.h 00:02:12.907 TEST_HEADER include/spdk/nbd.h 00:02:12.907 CC app/nvmf_tgt/nvmf_main.o 00:02:12.907 TEST_HEADER include/spdk/notify.h 00:02:12.907 TEST_HEADER include/spdk/net.h 00:02:12.907 TEST_HEADER include/spdk/nvme.h 00:02:12.907 TEST_HEADER include/spdk/nvme_intel.h 00:02:12.907 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:12.907 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:12.907 TEST_HEADER include/spdk/nvme_zns.h 00:02:12.907 TEST_HEADER include/spdk/nvme_spec.h 00:02:12.907 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:12.907 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:12.907 TEST_HEADER include/spdk/nvmf_spec.h 00:02:12.907 TEST_HEADER include/spdk/nvmf.h 00:02:12.907 TEST_HEADER include/spdk/opal.h 00:02:12.907 TEST_HEADER include/spdk/opal_spec.h 00:02:12.907 TEST_HEADER include/spdk/pci_ids.h 00:02:12.907 TEST_HEADER include/spdk/nvmf_transport.h 00:02:12.907 CC app/iscsi_tgt/iscsi_tgt.o 00:02:12.907 TEST_HEADER include/spdk/pipe.h 00:02:12.907 TEST_HEADER include/spdk/queue.h 00:02:12.907 TEST_HEADER include/spdk/reduce.h 00:02:12.907 CC app/spdk_dd/spdk_dd.o 00:02:12.907 TEST_HEADER include/spdk/rpc.h 00:02:12.907 TEST_HEADER include/spdk/scheduler.h 00:02:12.907 TEST_HEADER include/spdk/scsi.h 00:02:12.907 TEST_HEADER include/spdk/sock.h 00:02:12.907 TEST_HEADER include/spdk/scsi_spec.h 00:02:12.907 TEST_HEADER include/spdk/string.h 00:02:12.907 TEST_HEADER include/spdk/stdinc.h 00:02:12.907 TEST_HEADER include/spdk/thread.h 00:02:12.907 TEST_HEADER include/spdk/trace_parser.h 00:02:12.907 TEST_HEADER include/spdk/trace.h 00:02:12.907 TEST_HEADER include/spdk/ublk.h 00:02:12.907 TEST_HEADER include/spdk/tree.h 00:02:12.907 TEST_HEADER include/spdk/version.h 00:02:12.907 TEST_HEADER include/spdk/util.h 00:02:12.907 TEST_HEADER include/spdk/uuid.h 00:02:12.907 TEST_HEADER 
include/spdk/vfio_user_pci.h 00:02:12.907 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:12.907 TEST_HEADER include/spdk/vmd.h 00:02:12.907 TEST_HEADER include/spdk/xor.h 00:02:12.907 TEST_HEADER include/spdk/zipf.h 00:02:12.907 TEST_HEADER include/spdk/vhost.h 00:02:12.907 CXX test/cpp_headers/accel.o 00:02:12.907 CXX test/cpp_headers/accel_module.o 00:02:12.907 CXX test/cpp_headers/base64.o 00:02:12.907 CXX test/cpp_headers/barrier.o 00:02:12.907 CXX test/cpp_headers/assert.o 00:02:12.907 CXX test/cpp_headers/bdev.o 00:02:12.907 CXX test/cpp_headers/bdev_zone.o 00:02:12.907 CC app/spdk_tgt/spdk_tgt.o 00:02:12.907 CXX test/cpp_headers/bdev_module.o 00:02:12.907 CXX test/cpp_headers/bit_array.o 00:02:12.907 CXX test/cpp_headers/bit_pool.o 00:02:12.907 CXX test/cpp_headers/blob_bdev.o 00:02:12.907 CXX test/cpp_headers/blobfs_bdev.o 00:02:12.907 CXX test/cpp_headers/blobfs.o 00:02:12.907 CXX test/cpp_headers/blob.o 00:02:12.907 CXX test/cpp_headers/conf.o 00:02:12.907 CXX test/cpp_headers/cpuset.o 00:02:12.907 CXX test/cpp_headers/config.o 00:02:12.907 CXX test/cpp_headers/crc32.o 00:02:12.907 CXX test/cpp_headers/dif.o 00:02:12.907 CXX test/cpp_headers/crc64.o 00:02:12.907 CXX test/cpp_headers/dma.o 00:02:12.907 CXX test/cpp_headers/crc16.o 00:02:12.907 CXX test/cpp_headers/endian.o 00:02:12.907 CXX test/cpp_headers/env.o 00:02:12.907 CXX test/cpp_headers/env_dpdk.o 00:02:12.907 CXX test/cpp_headers/event.o 00:02:12.907 CXX test/cpp_headers/fd_group.o 00:02:12.907 CXX test/cpp_headers/file.o 00:02:12.907 CXX test/cpp_headers/fd.o 00:02:12.907 CXX test/cpp_headers/fsdev.o 00:02:12.907 CXX test/cpp_headers/fsdev_module.o 00:02:12.907 CXX test/cpp_headers/ftl.o 00:02:12.907 CXX test/cpp_headers/fuse_dispatcher.o 00:02:12.907 CXX test/cpp_headers/hexlify.o 00:02:12.907 CXX test/cpp_headers/gpt_spec.o 00:02:12.907 CXX test/cpp_headers/histogram_data.o 00:02:12.907 CXX test/cpp_headers/idxd.o 00:02:12.907 CXX test/cpp_headers/idxd_spec.o 00:02:12.907 CXX 
test/cpp_headers/ioat.o 00:02:12.907 CXX test/cpp_headers/init.o 00:02:12.907 CXX test/cpp_headers/iscsi_spec.o 00:02:12.907 CXX test/cpp_headers/ioat_spec.o 00:02:12.907 CXX test/cpp_headers/json.o 00:02:12.907 CXX test/cpp_headers/keyring_module.o 00:02:12.907 CXX test/cpp_headers/jsonrpc.o 00:02:12.907 CXX test/cpp_headers/likely.o 00:02:12.907 CXX test/cpp_headers/keyring.o 00:02:12.907 CXX test/cpp_headers/log.o 00:02:12.907 CXX test/cpp_headers/lvol.o 00:02:12.907 CXX test/cpp_headers/memory.o 00:02:12.907 CXX test/cpp_headers/md5.o 00:02:12.907 CXX test/cpp_headers/mmio.o 00:02:12.907 CXX test/cpp_headers/nbd.o 00:02:12.907 CXX test/cpp_headers/notify.o 00:02:12.907 CXX test/cpp_headers/nvme_intel.o 00:02:12.907 CXX test/cpp_headers/net.o 00:02:12.907 CXX test/cpp_headers/nvme_ocssd.o 00:02:12.907 CXX test/cpp_headers/nvme.o 00:02:12.907 CXX test/cpp_headers/nvme_spec.o 00:02:12.907 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:12.907 CXX test/cpp_headers/nvme_zns.o 00:02:12.907 CXX test/cpp_headers/nvmf_cmd.o 00:02:12.907 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:12.907 CXX test/cpp_headers/nvmf.o 00:02:12.907 CXX test/cpp_headers/nvmf_spec.o 00:02:12.907 CXX test/cpp_headers/nvmf_transport.o 00:02:12.907 CXX test/cpp_headers/opal.o 00:02:12.907 CC test/env/vtophys/vtophys.o 00:02:12.907 CC test/app/jsoncat/jsoncat.o 00:02:12.907 CC test/app/histogram_perf/histogram_perf.o 00:02:12.907 CC test/app/stub/stub.o 00:02:12.907 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:12.907 CC test/thread/poller_perf/poller_perf.o 00:02:12.907 CC examples/ioat/perf/perf.o 00:02:12.907 CC test/app/bdev_svc/bdev_svc.o 00:02:12.907 CC examples/util/zipf/zipf.o 00:02:12.907 CC test/env/memory/memory_ut.o 00:02:13.178 CC examples/ioat/verify/verify.o 00:02:13.178 CC test/env/pci/pci_ut.o 00:02:13.178 CC app/fio/nvme/fio_plugin.o 00:02:13.178 CC test/dma/test_dma/test_dma.o 00:02:13.178 CC app/fio/bdev/fio_plugin.o 00:02:13.178 LINK spdk_lspci 00:02:13.178 LINK 
spdk_trace_record 00:02:13.443 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:13.443 LINK nvmf_tgt 00:02:13.443 LINK spdk_nvme_discover 00:02:13.443 LINK rpc_client_test 00:02:13.443 LINK interrupt_tgt 00:02:13.443 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:13.443 CC test/env/mem_callbacks/mem_callbacks.o 00:02:13.443 LINK vtophys 00:02:13.443 LINK jsoncat 00:02:13.443 LINK env_dpdk_post_init 00:02:13.443 LINK iscsi_tgt 00:02:13.443 LINK bdev_svc 00:02:13.443 CXX test/cpp_headers/opal_spec.o 00:02:13.443 CXX test/cpp_headers/pci_ids.o 00:02:13.443 CXX test/cpp_headers/pipe.o 00:02:13.443 CXX test/cpp_headers/queue.o 00:02:13.443 LINK stub 00:02:13.443 CXX test/cpp_headers/reduce.o 00:02:13.443 CXX test/cpp_headers/rpc.o 00:02:13.704 CXX test/cpp_headers/scheduler.o 00:02:13.704 CXX test/cpp_headers/scsi.o 00:02:13.704 CXX test/cpp_headers/scsi_spec.o 00:02:13.704 CXX test/cpp_headers/sock.o 00:02:13.704 CXX test/cpp_headers/stdinc.o 00:02:13.704 CXX test/cpp_headers/thread.o 00:02:13.704 CXX test/cpp_headers/trace.o 00:02:13.704 CXX test/cpp_headers/string.o 00:02:13.704 CXX test/cpp_headers/trace_parser.o 00:02:13.704 CXX test/cpp_headers/tree.o 00:02:13.704 CXX test/cpp_headers/ublk.o 00:02:13.705 CXX test/cpp_headers/util.o 00:02:13.705 CXX test/cpp_headers/uuid.o 00:02:13.705 LINK histogram_perf 00:02:13.705 CXX test/cpp_headers/version.o 00:02:13.705 CXX test/cpp_headers/vfio_user_pci.o 00:02:13.705 CXX test/cpp_headers/vfio_user_spec.o 00:02:13.705 CXX test/cpp_headers/vhost.o 00:02:13.705 CXX test/cpp_headers/vmd.o 00:02:13.705 CXX test/cpp_headers/xor.o 00:02:13.705 LINK verify 00:02:13.705 CXX test/cpp_headers/zipf.o 00:02:13.705 LINK poller_perf 00:02:13.705 LINK zipf 00:02:13.705 LINK ioat_perf 00:02:13.705 LINK spdk_tgt 00:02:13.705 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:13.705 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:13.705 LINK spdk_dd 00:02:13.705 LINK spdk_trace 00:02:13.963 LINK pci_ut 00:02:13.963 LINK nvme_fuzz 00:02:13.963 
LINK spdk_bdev 00:02:13.963 LINK spdk_nvme 00:02:14.222 CC app/vhost/vhost.o 00:02:14.222 CC test/event/reactor_perf/reactor_perf.o 00:02:14.222 LINK test_dma 00:02:14.222 CC test/event/event_perf/event_perf.o 00:02:14.222 CC test/event/reactor/reactor.o 00:02:14.222 CC examples/idxd/perf/perf.o 00:02:14.222 CC examples/sock/hello_world/hello_sock.o 00:02:14.222 CC examples/vmd/lsvmd/lsvmd.o 00:02:14.222 CC examples/vmd/led/led.o 00:02:14.222 CC test/event/app_repeat/app_repeat.o 00:02:14.222 LINK vhost_fuzz 00:02:14.222 CC test/event/scheduler/scheduler.o 00:02:14.222 CC examples/thread/thread/thread_ex.o 00:02:14.222 LINK spdk_nvme_identify 00:02:14.222 LINK spdk_nvme_perf 00:02:14.222 LINK spdk_top 00:02:14.222 LINK reactor 00:02:14.222 LINK mem_callbacks 00:02:14.222 LINK reactor_perf 00:02:14.222 LINK lsvmd 00:02:14.222 LINK event_perf 00:02:14.222 LINK led 00:02:14.222 LINK app_repeat 00:02:14.222 LINK vhost 00:02:14.480 LINK hello_sock 00:02:14.480 LINK scheduler 00:02:14.480 LINK thread 00:02:14.480 LINK idxd_perf 00:02:14.480 LINK memory_ut 00:02:14.480 CC test/nvme/aer/aer.o 00:02:14.480 CC test/nvme/reserve/reserve.o 00:02:14.480 CC test/nvme/overhead/overhead.o 00:02:14.480 CC test/nvme/connect_stress/connect_stress.o 00:02:14.480 CC test/nvme/startup/startup.o 00:02:14.480 CC test/nvme/reset/reset.o 00:02:14.480 CC test/nvme/cuse/cuse.o 00:02:14.480 CC test/nvme/boot_partition/boot_partition.o 00:02:14.480 CC test/nvme/sgl/sgl.o 00:02:14.480 CC test/nvme/err_injection/err_injection.o 00:02:14.480 CC test/nvme/e2edp/nvme_dp.o 00:02:14.480 CC test/nvme/fdp/fdp.o 00:02:14.480 CC test/nvme/compliance/nvme_compliance.o 00:02:14.480 CC test/nvme/simple_copy/simple_copy.o 00:02:14.738 CC test/nvme/fused_ordering/fused_ordering.o 00:02:14.738 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:14.738 CC test/accel/dif/dif.o 00:02:14.738 CC test/blobfs/mkfs/mkfs.o 00:02:14.738 CC test/lvol/esnap/esnap.o 00:02:14.738 LINK boot_partition 00:02:14.738 LINK reserve 
00:02:14.738 CC examples/nvme/hello_world/hello_world.o 00:02:14.738 CC examples/nvme/hotplug/hotplug.o 00:02:14.738 CC examples/nvme/arbitration/arbitration.o 00:02:14.738 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:14.738 CC examples/nvme/reconnect/reconnect.o 00:02:14.738 LINK connect_stress 00:02:14.738 LINK doorbell_aers 00:02:14.738 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:14.738 LINK startup 00:02:14.738 CC examples/nvme/abort/abort.o 00:02:14.738 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:14.738 LINK err_injection 00:02:14.738 LINK fused_ordering 00:02:14.738 LINK simple_copy 00:02:14.996 LINK aer 00:02:14.996 LINK reset 00:02:14.996 LINK nvme_dp 00:02:14.996 LINK overhead 00:02:14.996 LINK mkfs 00:02:14.996 LINK sgl 00:02:14.996 CC examples/accel/perf/accel_perf.o 00:02:14.996 LINK fdp 00:02:14.996 LINK nvme_compliance 00:02:14.996 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:14.996 CC examples/blob/hello_world/hello_blob.o 00:02:14.996 CC examples/blob/cli/blobcli.o 00:02:14.996 LINK cmb_copy 00:02:14.996 LINK pmr_persistence 00:02:14.996 LINK hello_world 00:02:14.996 LINK iscsi_fuzz 00:02:14.996 LINK hotplug 00:02:15.254 LINK arbitration 00:02:15.254 LINK abort 00:02:15.254 LINK reconnect 00:02:15.254 LINK hello_blob 00:02:15.254 LINK dif 00:02:15.254 LINK nvme_manage 00:02:15.254 LINK hello_fsdev 00:02:15.254 LINK accel_perf 00:02:15.514 LINK blobcli 00:02:15.772 LINK cuse 00:02:15.772 CC test/bdev/bdevio/bdevio.o 00:02:15.772 CC examples/bdev/hello_world/hello_bdev.o 00:02:15.772 CC examples/bdev/bdevperf/bdevperf.o 00:02:16.031 LINK hello_bdev 00:02:16.031 LINK bdevio 00:02:16.598 LINK bdevperf 00:02:16.857 CC examples/nvmf/nvmf/nvmf.o 00:02:17.115 LINK nvmf 00:02:18.492 LINK esnap 00:02:18.492 00:02:18.492 real 0m55.841s 00:02:18.492 user 8m19.052s 00:02:18.492 sys 3m46.063s 00:02:18.492 18:39:40 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:18.492 18:39:40 make -- common/autotest_common.sh@10 -- $ set +x 
00:02:18.492 ************************************ 00:02:18.492 END TEST make 00:02:18.492 ************************************ 00:02:18.751 18:39:40 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:18.751 18:39:40 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:18.751 18:39:40 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:18.751 18:39:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.751 18:39:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:18.751 18:39:40 -- pm/common@44 -- $ pid=3366881 00:02:18.751 18:39:40 -- pm/common@50 -- $ kill -TERM 3366881 00:02:18.751 18:39:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.751 18:39:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:18.751 18:39:40 -- pm/common@44 -- $ pid=3366883 00:02:18.751 18:39:40 -- pm/common@50 -- $ kill -TERM 3366883 00:02:18.751 18:39:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.751 18:39:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:18.751 18:39:40 -- pm/common@44 -- $ pid=3366884 00:02:18.751 18:39:40 -- pm/common@50 -- $ kill -TERM 3366884 00:02:18.751 18:39:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.751 18:39:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:18.751 18:39:40 -- pm/common@44 -- $ pid=3366910 00:02:18.751 18:39:40 -- pm/common@50 -- $ sudo -E kill -TERM 3366910 00:02:18.751 18:39:40 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:18.751 18:39:40 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:18.751 18:39:40 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:18.751 18:39:40 -- common/autotest_common.sh@1693 -- # lcov --version 00:02:18.751 18:39:40 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:18.751 18:39:41 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:18.751 18:39:41 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:18.751 18:39:41 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:18.751 18:39:41 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:18.751 18:39:41 -- scripts/common.sh@336 -- # IFS=.-: 00:02:18.751 18:39:41 -- scripts/common.sh@336 -- # read -ra ver1 00:02:18.751 18:39:41 -- scripts/common.sh@337 -- # IFS=.-: 00:02:18.751 18:39:41 -- scripts/common.sh@337 -- # read -ra ver2 00:02:18.751 18:39:41 -- scripts/common.sh@338 -- # local 'op=<' 00:02:18.751 18:39:41 -- scripts/common.sh@340 -- # ver1_l=2 00:02:18.751 18:39:41 -- scripts/common.sh@341 -- # ver2_l=1 00:02:18.751 18:39:41 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:18.751 18:39:41 -- scripts/common.sh@344 -- # case "$op" in 00:02:18.751 18:39:41 -- scripts/common.sh@345 -- # : 1 00:02:18.751 18:39:41 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:18.751 18:39:41 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:18.751 18:39:41 -- scripts/common.sh@365 -- # decimal 1 00:02:18.751 18:39:41 -- scripts/common.sh@353 -- # local d=1 00:02:18.751 18:39:41 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:18.751 18:39:41 -- scripts/common.sh@355 -- # echo 1 00:02:18.751 18:39:41 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:18.751 18:39:41 -- scripts/common.sh@366 -- # decimal 2 00:02:18.751 18:39:41 -- scripts/common.sh@353 -- # local d=2 00:02:18.751 18:39:41 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:18.751 18:39:41 -- scripts/common.sh@355 -- # echo 2 00:02:18.751 18:39:41 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:18.751 18:39:41 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:18.751 18:39:41 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:18.751 18:39:41 -- scripts/common.sh@368 -- # return 0 00:02:18.751 18:39:41 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:18.751 18:39:41 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:18.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:18.751 --rc genhtml_branch_coverage=1 00:02:18.751 --rc genhtml_function_coverage=1 00:02:18.751 --rc genhtml_legend=1 00:02:18.751 --rc geninfo_all_blocks=1 00:02:18.751 --rc geninfo_unexecuted_blocks=1 00:02:18.751 00:02:18.751 ' 00:02:18.751 18:39:41 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:18.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:18.751 --rc genhtml_branch_coverage=1 00:02:18.751 --rc genhtml_function_coverage=1 00:02:18.751 --rc genhtml_legend=1 00:02:18.751 --rc geninfo_all_blocks=1 00:02:18.751 --rc geninfo_unexecuted_blocks=1 00:02:18.751 00:02:18.751 ' 00:02:18.751 18:39:41 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:18.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:18.751 --rc genhtml_branch_coverage=1 00:02:18.751 --rc 
genhtml_function_coverage=1 00:02:18.751 --rc genhtml_legend=1 00:02:18.751 --rc geninfo_all_blocks=1 00:02:18.751 --rc geninfo_unexecuted_blocks=1 00:02:18.751 00:02:18.751 ' 00:02:18.751 18:39:41 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:18.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:18.751 --rc genhtml_branch_coverage=1 00:02:18.751 --rc genhtml_function_coverage=1 00:02:18.751 --rc genhtml_legend=1 00:02:18.751 --rc geninfo_all_blocks=1 00:02:18.751 --rc geninfo_unexecuted_blocks=1 00:02:18.751 00:02:18.751 ' 00:02:18.751 18:39:41 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:18.751 18:39:41 -- nvmf/common.sh@7 -- # uname -s 00:02:18.751 18:39:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:18.751 18:39:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:18.751 18:39:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:18.751 18:39:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:18.751 18:39:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:18.751 18:39:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:18.751 18:39:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:18.751 18:39:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:18.751 18:39:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:18.751 18:39:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:18.751 18:39:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:02:18.751 18:39:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:02:18.751 18:39:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:18.751 18:39:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:18.751 18:39:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:18.751 18:39:41 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:18.751 18:39:41 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:18.751 18:39:41 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:18.751 18:39:41 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:18.751 18:39:41 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:18.751 18:39:41 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:18.751 18:39:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:18.751 18:39:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:18.751 18:39:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:18.751 18:39:41 -- paths/export.sh@5 -- # export PATH 00:02:18.751 18:39:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:18.751 18:39:41 -- nvmf/common.sh@51 -- # : 0 00:02:18.751 18:39:41 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:18.751 18:39:41 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:02:18.751 18:39:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:18.751 18:39:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:18.752 18:39:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:18.752 18:39:41 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:18.752 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:18.752 18:39:41 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:18.752 18:39:41 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:18.752 18:39:41 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:19.011 18:39:41 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:19.011 18:39:41 -- spdk/autotest.sh@32 -- # uname -s 00:02:19.011 18:39:41 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:19.011 18:39:41 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:19.011 18:39:41 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:19.011 18:39:41 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:19.011 18:39:41 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:19.011 18:39:41 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:19.011 18:39:41 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:19.011 18:39:41 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:19.011 18:39:41 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:19.011 18:39:41 -- spdk/autotest.sh@48 -- # udevadm_pid=3429876 00:02:19.011 18:39:41 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:19.011 18:39:41 -- pm/common@17 -- # local monitor 00:02:19.011 18:39:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:19.011 18:39:41 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:02:19.011 18:39:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:19.011 18:39:41 -- pm/common@21 -- # date +%s 00:02:19.011 18:39:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:19.011 18:39:41 -- pm/common@21 -- # date +%s 00:02:19.011 18:39:41 -- pm/common@25 -- # sleep 1 00:02:19.011 18:39:41 -- pm/common@21 -- # date +%s 00:02:19.011 18:39:41 -- pm/common@21 -- # date +%s 00:02:19.011 18:39:41 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732124381 00:02:19.011 18:39:41 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732124381 00:02:19.011 18:39:41 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732124381 00:02:19.011 18:39:41 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732124381 00:02:19.011 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732124381_collect-cpu-load.pm.log 00:02:19.011 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732124381_collect-vmstat.pm.log 00:02:19.011 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732124381_collect-cpu-temp.pm.log 00:02:19.011 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732124381_collect-bmc-pm.bmc.pm.log 00:02:19.949 
18:39:42 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:19.949 18:39:42 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:19.949 18:39:42 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:19.949 18:39:42 -- common/autotest_common.sh@10 -- # set +x 00:02:19.949 18:39:42 -- spdk/autotest.sh@59 -- # create_test_list 00:02:19.949 18:39:42 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:19.949 18:39:42 -- common/autotest_common.sh@10 -- # set +x 00:02:19.949 18:39:42 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:19.949 18:39:42 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:19.949 18:39:42 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:19.949 18:39:42 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:19.949 18:39:42 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:19.949 18:39:42 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:19.949 18:39:42 -- common/autotest_common.sh@1457 -- # uname 00:02:19.949 18:39:42 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:19.949 18:39:42 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:19.949 18:39:42 -- common/autotest_common.sh@1477 -- # uname 00:02:19.949 18:39:42 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:19.949 18:39:42 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:19.949 18:39:42 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:19.949 lcov: LCOV version 1.15 00:02:19.949 18:39:42 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:41.883 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:41.883 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:45.253 18:40:07 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:02:45.253 18:40:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:45.253 18:40:07 -- common/autotest_common.sh@10 -- # set +x 00:02:45.253 18:40:07 -- spdk/autotest.sh@78 -- # rm -f 00:02:45.253 18:40:07 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:47.789 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:47.789 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:47.789 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:47.789 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:47.789 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:47.789 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:47.789 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:47.789 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:47.789 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:47.789 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:47.789 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:48.048 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:48.048 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:48.048 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:48.048 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:48.048 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:48.048 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:48.048 18:40:10 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:02:48.048 18:40:10 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:02:48.048 18:40:10 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:02:48.048 18:40:10 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:02:48.048 18:40:10 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:02:48.048 18:40:10 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:02:48.048 18:40:10 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:02:48.048 18:40:10 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:48.048 18:40:10 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:02:48.048 18:40:10 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:02:48.048 18:40:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:02:48.048 18:40:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:02:48.048 18:40:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:02:48.048 18:40:10 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:02:48.048 18:40:10 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:48.048 No valid GPT data, bailing 00:02:48.048 18:40:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:48.306 18:40:10 -- scripts/common.sh@394 -- # pt= 00:02:48.306 18:40:10 -- scripts/common.sh@395 -- # return 1 00:02:48.306 18:40:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:48.306 1+0 records in 00:02:48.306 1+0 records out 00:02:48.306 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00136791 s, 767 MB/s 00:02:48.306 18:40:10 -- spdk/autotest.sh@105 -- # sync 00:02:48.306 18:40:10 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:48.306 18:40:10 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:48.306 18:40:10 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:53.595 18:40:15 -- spdk/autotest.sh@111 -- # uname -s 00:02:53.595 18:40:15 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:02:53.595 18:40:15 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:02:53.595 18:40:15 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:56.886 Hugepages 00:02:56.886 node hugesize free / total 00:02:56.886 node0 1048576kB 0 / 0 00:02:56.886 node0 2048kB 0 / 0 00:02:56.886 node1 1048576kB 0 / 0 00:02:56.886 node1 2048kB 0 / 0 00:02:56.886 00:02:56.886 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:56.886 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:02:56.886 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:02:56.887 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:02:56.887 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:02:56.887 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:02:56.887 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:02:56.887 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:02:56.887 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:02:56.887 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:02:56.887 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:02:56.887 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:02:56.887 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:02:56.887 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:02:56.887 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:02:56.887 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:02:56.887 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:02:56.887 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:02:56.887 18:40:18 -- spdk/autotest.sh@117 -- # uname -s 00:02:56.887 18:40:18 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:02:56.887 18:40:18 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:02:56.887 18:40:18 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:59.420 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:59.420 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:59.420 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:59.420 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:59.420 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:59.420 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:59.680 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:59.680 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:59.680 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:59.680 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:59.680 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:59.680 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:59.680 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:59.680 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:59.680 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:59.680 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:01.061 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:01.061 18:40:23 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:02.442 18:40:24 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:02.442 18:40:24 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:02.442 18:40:24 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:02.442 18:40:24 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:02.442 18:40:24 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:02.442 18:40:24 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:02.442 18:40:24 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:02.442 18:40:24 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:02.442 18:40:24 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:03:02.442 18:40:24 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:02.442 18:40:24 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:02.442 18:40:24 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:04.980 Waiting for block devices as requested 00:03:04.980 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:05.239 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:05.239 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:05.239 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:05.498 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:05.498 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:05.498 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:05.771 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:05.771 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:05.771 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:06.029 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:06.029 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:06.029 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:06.029 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:06.288 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:06.288 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:06.288 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:06.547 18:40:28 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:06.547 18:40:28 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:06.547 18:40:28 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:06.547 18:40:28 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:03:06.547 18:40:28 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:06.547 18:40:28 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:06.547 18:40:28 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:06.547 18:40:28 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:06.547 18:40:28 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:06.547 18:40:28 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:06.547 18:40:28 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:06.547 18:40:28 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:06.547 18:40:28 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:06.547 18:40:28 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:03:06.547 18:40:28 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:06.547 18:40:28 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:06.547 18:40:28 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:06.547 18:40:28 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:06.547 18:40:28 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:06.547 18:40:28 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:06.547 18:40:28 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:06.547 18:40:28 -- common/autotest_common.sh@1543 -- # continue 00:03:06.547 18:40:28 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:06.547 18:40:28 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:06.547 18:40:28 -- common/autotest_common.sh@10 -- # set +x 00:03:06.547 18:40:28 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:06.547 18:40:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:06.547 18:40:28 -- common/autotest_common.sh@10 -- # set +x 00:03:06.547 18:40:28 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:09.832 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:09.832 0000:00:04.6 (8086 2021): 
ioatdma -> vfio-pci 00:03:09.832 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:09.832 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:09.832 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:09.832 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:09.832 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:09.832 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:09.832 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:09.832 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:09.832 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:09.833 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:09.833 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:09.833 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:09.833 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:09.833 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:11.211 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:11.211 18:40:33 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:11.211 18:40:33 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:11.212 18:40:33 -- common/autotest_common.sh@10 -- # set +x 00:03:11.212 18:40:33 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:11.212 18:40:33 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:11.212 18:40:33 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:11.212 18:40:33 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:11.212 18:40:33 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:11.212 18:40:33 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:11.212 18:40:33 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:11.212 18:40:33 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:11.212 18:40:33 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:11.212 18:40:33 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:11.212 18:40:33 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:03:11.212 18:40:33 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:11.212 18:40:33 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:11.212 18:40:33 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:11.212 18:40:33 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:11.212 18:40:33 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:11.212 18:40:33 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:11.212 18:40:33 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:11.212 18:40:33 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:11.212 18:40:33 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:11.212 18:40:33 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:11.212 18:40:33 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:03:11.212 18:40:33 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:03:11.212 18:40:33 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=3444339 00:03:11.212 18:40:33 -- common/autotest_common.sh@1585 -- # waitforlisten 3444339 00:03:11.212 18:40:33 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:11.212 18:40:33 -- common/autotest_common.sh@835 -- # '[' -z 3444339 ']' 00:03:11.212 18:40:33 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:11.212 18:40:33 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:11.212 18:40:33 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:11.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:03:11.212 18:40:33 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:11.212 18:40:33 -- common/autotest_common.sh@10 -- # set +x 00:03:11.212 [2024-11-20 18:40:33.440076] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:03:11.212 [2024-11-20 18:40:33.440127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3444339 ] 00:03:11.212 [2024-11-20 18:40:33.516649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:11.471 [2024-11-20 18:40:33.558208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:12.037 18:40:34 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:12.037 18:40:34 -- common/autotest_common.sh@868 -- # return 0 00:03:12.037 18:40:34 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:12.037 18:40:34 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:12.037 18:40:34 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:15.327 nvme0n1 00:03:15.327 18:40:37 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:15.327 [2024-11-20 18:40:37.450888] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:15.327 request: 00:03:15.327 { 00:03:15.327 "nvme_ctrlr_name": "nvme0", 00:03:15.327 "password": "test", 00:03:15.327 "method": "bdev_nvme_opal_revert", 00:03:15.327 "req_id": 1 00:03:15.327 } 00:03:15.327 Got JSON-RPC error response 00:03:15.327 response: 00:03:15.327 { 00:03:15.327 "code": -32602, 00:03:15.327 "message": "Invalid parameters" 00:03:15.327 } 00:03:15.327 18:40:37 -- common/autotest_common.sh@1591 -- # true 
00:03:15.327 18:40:37 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:15.327 18:40:37 -- common/autotest_common.sh@1595 -- # killprocess 3444339 00:03:15.327 18:40:37 -- common/autotest_common.sh@954 -- # '[' -z 3444339 ']' 00:03:15.327 18:40:37 -- common/autotest_common.sh@958 -- # kill -0 3444339 00:03:15.327 18:40:37 -- common/autotest_common.sh@959 -- # uname 00:03:15.327 18:40:37 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:15.327 18:40:37 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3444339 00:03:15.327 18:40:37 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:15.327 18:40:37 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:15.327 18:40:37 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3444339' 00:03:15.327 killing process with pid 3444339 00:03:15.327 18:40:37 -- common/autotest_common.sh@973 -- # kill 3444339 00:03:15.327 18:40:37 -- common/autotest_common.sh@978 -- # wait 3444339 00:03:17.862 18:40:39 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:17.862 18:40:39 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:17.862 18:40:39 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:17.862 18:40:39 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:17.862 18:40:39 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:17.862 18:40:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:17.862 18:40:39 -- common/autotest_common.sh@10 -- # set +x 00:03:17.862 18:40:39 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:17.862 18:40:39 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:17.862 18:40:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:17.862 18:40:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:17.862 18:40:39 -- common/autotest_common.sh@10 -- # set +x 00:03:17.862 ************************************ 00:03:17.862 START TEST env 00:03:17.862 
************************************ 00:03:17.862 18:40:39 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:17.862 * Looking for test storage... 00:03:17.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:17.862 18:40:39 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:17.862 18:40:39 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:17.862 18:40:39 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:17.862 18:40:39 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:17.862 18:40:39 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:17.862 18:40:39 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:17.862 18:40:39 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:17.862 18:40:39 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:17.862 18:40:39 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:17.862 18:40:39 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:17.862 18:40:39 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:17.862 18:40:39 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:17.862 18:40:39 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:17.862 18:40:39 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:17.862 18:40:39 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:17.862 18:40:39 env -- scripts/common.sh@344 -- # case "$op" in 00:03:17.862 18:40:39 env -- scripts/common.sh@345 -- # : 1 00:03:17.862 18:40:39 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:17.862 18:40:39 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:17.862 18:40:39 env -- scripts/common.sh@365 -- # decimal 1 00:03:17.862 18:40:39 env -- scripts/common.sh@353 -- # local d=1 00:03:17.862 18:40:39 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:17.862 18:40:39 env -- scripts/common.sh@355 -- # echo 1 00:03:17.862 18:40:39 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:17.862 18:40:39 env -- scripts/common.sh@366 -- # decimal 2 00:03:17.862 18:40:39 env -- scripts/common.sh@353 -- # local d=2 00:03:17.862 18:40:39 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:17.862 18:40:39 env -- scripts/common.sh@355 -- # echo 2 00:03:17.862 18:40:39 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:17.862 18:40:39 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:17.862 18:40:39 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:17.862 18:40:39 env -- scripts/common.sh@368 -- # return 0 00:03:17.862 18:40:39 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:17.862 18:40:39 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:17.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:17.862 --rc genhtml_branch_coverage=1 00:03:17.862 --rc genhtml_function_coverage=1 00:03:17.862 --rc genhtml_legend=1 00:03:17.862 --rc geninfo_all_blocks=1 00:03:17.862 --rc geninfo_unexecuted_blocks=1 00:03:17.862 00:03:17.862 ' 00:03:17.862 18:40:39 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:17.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:17.862 --rc genhtml_branch_coverage=1 00:03:17.862 --rc genhtml_function_coverage=1 00:03:17.862 --rc genhtml_legend=1 00:03:17.862 --rc geninfo_all_blocks=1 00:03:17.862 --rc geninfo_unexecuted_blocks=1 00:03:17.862 00:03:17.862 ' 00:03:17.862 18:40:39 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:17.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:17.862 --rc genhtml_branch_coverage=1 00:03:17.862 --rc genhtml_function_coverage=1 00:03:17.862 --rc genhtml_legend=1 00:03:17.862 --rc geninfo_all_blocks=1 00:03:17.862 --rc geninfo_unexecuted_blocks=1 00:03:17.862 00:03:17.862 ' 00:03:17.862 18:40:39 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:17.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:17.862 --rc genhtml_branch_coverage=1 00:03:17.862 --rc genhtml_function_coverage=1 00:03:17.862 --rc genhtml_legend=1 00:03:17.862 --rc geninfo_all_blocks=1 00:03:17.862 --rc geninfo_unexecuted_blocks=1 00:03:17.862 00:03:17.862 ' 00:03:17.862 18:40:39 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:17.862 18:40:39 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:17.862 18:40:39 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:17.862 18:40:39 env -- common/autotest_common.sh@10 -- # set +x 00:03:17.862 ************************************ 00:03:17.862 START TEST env_memory 00:03:17.862 ************************************ 00:03:17.862 18:40:39 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:17.862 00:03:17.862 00:03:17.862 CUnit - A unit testing framework for C - Version 2.1-3 00:03:17.862 http://cunit.sourceforge.net/ 00:03:17.862 00:03:17.862 00:03:17.862 Suite: memory 00:03:17.862 Test: alloc and free memory map ...[2024-11-20 18:40:39.957909] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:17.862 passed 00:03:17.862 Test: mem map translation ...[2024-11-20 18:40:39.976662] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:17.862 [2024-11-20 
18:40:39.976675] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:17.862 [2024-11-20 18:40:39.976709] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:17.862 [2024-11-20 18:40:39.976715] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:17.862 passed 00:03:17.862 Test: mem map registration ...[2024-11-20 18:40:40.015356] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:17.862 [2024-11-20 18:40:40.015371] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:17.862 passed 00:03:17.862 Test: mem map adjacent registrations ...passed 00:03:17.862 00:03:17.862 Run Summary: Type Total Ran Passed Failed Inactive 00:03:17.862 suites 1 1 n/a 0 0 00:03:17.862 tests 4 4 4 0 0 00:03:17.862 asserts 152 152 152 0 n/a 00:03:17.862 00:03:17.862 Elapsed time = 0.143 seconds 00:03:17.862 00:03:17.862 real 0m0.156s 00:03:17.862 user 0m0.146s 00:03:17.862 sys 0m0.010s 00:03:17.863 18:40:40 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:17.863 18:40:40 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:17.863 ************************************ 00:03:17.863 END TEST env_memory 00:03:17.863 ************************************ 00:03:17.863 18:40:40 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:17.863 18:40:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:03:17.863 18:40:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:17.863 18:40:40 env -- common/autotest_common.sh@10 -- # set +x 00:03:17.863 ************************************ 00:03:17.863 START TEST env_vtophys 00:03:17.863 ************************************ 00:03:17.863 18:40:40 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:17.863 EAL: lib.eal log level changed from notice to debug 00:03:17.863 EAL: Detected lcore 0 as core 0 on socket 0 00:03:17.863 EAL: Detected lcore 1 as core 1 on socket 0 00:03:17.863 EAL: Detected lcore 2 as core 2 on socket 0 00:03:17.863 EAL: Detected lcore 3 as core 3 on socket 0 00:03:17.863 EAL: Detected lcore 4 as core 4 on socket 0 00:03:17.863 EAL: Detected lcore 5 as core 5 on socket 0 00:03:17.863 EAL: Detected lcore 6 as core 6 on socket 0 00:03:17.863 EAL: Detected lcore 7 as core 8 on socket 0 00:03:17.863 EAL: Detected lcore 8 as core 9 on socket 0 00:03:17.863 EAL: Detected lcore 9 as core 10 on socket 0 00:03:17.863 EAL: Detected lcore 10 as core 11 on socket 0 00:03:17.863 EAL: Detected lcore 11 as core 12 on socket 0 00:03:17.863 EAL: Detected lcore 12 as core 13 on socket 0 00:03:17.863 EAL: Detected lcore 13 as core 16 on socket 0 00:03:17.863 EAL: Detected lcore 14 as core 17 on socket 0 00:03:17.863 EAL: Detected lcore 15 as core 18 on socket 0 00:03:17.863 EAL: Detected lcore 16 as core 19 on socket 0 00:03:17.863 EAL: Detected lcore 17 as core 20 on socket 0 00:03:17.863 EAL: Detected lcore 18 as core 21 on socket 0 00:03:17.863 EAL: Detected lcore 19 as core 25 on socket 0 00:03:17.863 EAL: Detected lcore 20 as core 26 on socket 0 00:03:17.863 EAL: Detected lcore 21 as core 27 on socket 0 00:03:17.863 EAL: Detected lcore 22 as core 28 on socket 0 00:03:17.863 EAL: Detected lcore 23 as core 29 on socket 0 00:03:17.863 EAL: Detected lcore 24 as core 0 on socket 1 00:03:17.863 EAL: Detected lcore 25 
as core 1 on socket 1 00:03:17.863 EAL: Detected lcore 26 as core 2 on socket 1 00:03:17.863 EAL: Detected lcore 27 as core 3 on socket 1 00:03:17.863 EAL: Detected lcore 28 as core 4 on socket 1 00:03:17.863 EAL: Detected lcore 29 as core 5 on socket 1 00:03:17.863 EAL: Detected lcore 30 as core 6 on socket 1 00:03:17.863 EAL: Detected lcore 31 as core 8 on socket 1 00:03:17.863 EAL: Detected lcore 32 as core 10 on socket 1 00:03:17.863 EAL: Detected lcore 33 as core 11 on socket 1 00:03:17.863 EAL: Detected lcore 34 as core 12 on socket 1 00:03:17.863 EAL: Detected lcore 35 as core 13 on socket 1 00:03:17.863 EAL: Detected lcore 36 as core 16 on socket 1 00:03:17.863 EAL: Detected lcore 37 as core 17 on socket 1 00:03:17.863 EAL: Detected lcore 38 as core 18 on socket 1 00:03:17.863 EAL: Detected lcore 39 as core 19 on socket 1 00:03:17.863 EAL: Detected lcore 40 as core 20 on socket 1 00:03:17.863 EAL: Detected lcore 41 as core 21 on socket 1 00:03:17.863 EAL: Detected lcore 42 as core 24 on socket 1 00:03:17.863 EAL: Detected lcore 43 as core 25 on socket 1 00:03:17.863 EAL: Detected lcore 44 as core 26 on socket 1 00:03:17.863 EAL: Detected lcore 45 as core 27 on socket 1 00:03:17.863 EAL: Detected lcore 46 as core 28 on socket 1 00:03:17.863 EAL: Detected lcore 47 as core 29 on socket 1 00:03:17.863 EAL: Detected lcore 48 as core 0 on socket 0 00:03:17.863 EAL: Detected lcore 49 as core 1 on socket 0 00:03:17.863 EAL: Detected lcore 50 as core 2 on socket 0 00:03:17.863 EAL: Detected lcore 51 as core 3 on socket 0 00:03:17.863 EAL: Detected lcore 52 as core 4 on socket 0 00:03:17.863 EAL: Detected lcore 53 as core 5 on socket 0 00:03:17.863 EAL: Detected lcore 54 as core 6 on socket 0 00:03:17.863 EAL: Detected lcore 55 as core 8 on socket 0 00:03:17.863 EAL: Detected lcore 56 as core 9 on socket 0 00:03:17.863 EAL: Detected lcore 57 as core 10 on socket 0 00:03:17.863 EAL: Detected lcore 58 as core 11 on socket 0 00:03:17.863 EAL: Detected lcore 59 as core 
12 on socket 0 00:03:17.863 EAL: Detected lcore 60 as core 13 on socket 0 00:03:17.863 EAL: Detected lcore 61 as core 16 on socket 0 00:03:17.863 EAL: Detected lcore 62 as core 17 on socket 0 00:03:17.863 EAL: Detected lcore 63 as core 18 on socket 0 00:03:17.863 EAL: Detected lcore 64 as core 19 on socket 0 00:03:17.863 EAL: Detected lcore 65 as core 20 on socket 0 00:03:17.863 EAL: Detected lcore 66 as core 21 on socket 0 00:03:17.863 EAL: Detected lcore 67 as core 25 on socket 0 00:03:17.863 EAL: Detected lcore 68 as core 26 on socket 0 00:03:17.863 EAL: Detected lcore 69 as core 27 on socket 0 00:03:17.863 EAL: Detected lcore 70 as core 28 on socket 0 00:03:17.863 EAL: Detected lcore 71 as core 29 on socket 0 00:03:17.863 EAL: Detected lcore 72 as core 0 on socket 1 00:03:17.863 EAL: Detected lcore 73 as core 1 on socket 1 00:03:17.863 EAL: Detected lcore 74 as core 2 on socket 1 00:03:17.863 EAL: Detected lcore 75 as core 3 on socket 1 00:03:17.863 EAL: Detected lcore 76 as core 4 on socket 1 00:03:17.863 EAL: Detected lcore 77 as core 5 on socket 1 00:03:17.863 EAL: Detected lcore 78 as core 6 on socket 1 00:03:17.863 EAL: Detected lcore 79 as core 8 on socket 1 00:03:17.863 EAL: Detected lcore 80 as core 10 on socket 1 00:03:17.863 EAL: Detected lcore 81 as core 11 on socket 1 00:03:17.863 EAL: Detected lcore 82 as core 12 on socket 1 00:03:17.863 EAL: Detected lcore 83 as core 13 on socket 1 00:03:17.863 EAL: Detected lcore 84 as core 16 on socket 1 00:03:17.863 EAL: Detected lcore 85 as core 17 on socket 1 00:03:17.863 EAL: Detected lcore 86 as core 18 on socket 1 00:03:17.863 EAL: Detected lcore 87 as core 19 on socket 1 00:03:17.863 EAL: Detected lcore 88 as core 20 on socket 1 00:03:17.863 EAL: Detected lcore 89 as core 21 on socket 1 00:03:17.863 EAL: Detected lcore 90 as core 24 on socket 1 00:03:17.863 EAL: Detected lcore 91 as core 25 on socket 1 00:03:17.863 EAL: Detected lcore 92 as core 26 on socket 1 00:03:17.863 EAL: Detected lcore 93 as core 
27 on socket 1 00:03:17.863 EAL: Detected lcore 94 as core 28 on socket 1 00:03:17.863 EAL: Detected lcore 95 as core 29 on socket 1 00:03:17.863 EAL: Maximum logical cores by configuration: 128 00:03:17.863 EAL: Detected CPU lcores: 96 00:03:17.863 EAL: Detected NUMA nodes: 2 00:03:17.863 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:17.863 EAL: Detected shared linkage of DPDK 00:03:17.863 EAL: No shared files mode enabled, IPC will be disabled 00:03:18.123 EAL: Bus pci wants IOVA as 'DC' 00:03:18.123 EAL: Buses did not request a specific IOVA mode. 00:03:18.123 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:18.123 EAL: Selected IOVA mode 'VA' 00:03:18.123 EAL: Probing VFIO support... 00:03:18.123 EAL: IOMMU type 1 (Type 1) is supported 00:03:18.123 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:18.123 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:18.123 EAL: VFIO support initialized 00:03:18.123 EAL: Ask a virtual area of 0x2e000 bytes 00:03:18.123 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:18.123 EAL: Setting up physically contiguous memory... 
00:03:18.123 EAL: Setting maximum number of open files to 524288
00:03:18.123 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:03:18.123 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:03:18.123 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:03:18.123 EAL: Ask a virtual area of 0x61000 bytes
00:03:18.123 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:03:18.123 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:18.123 EAL: Ask a virtual area of 0x400000000 bytes
00:03:18.123 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:03:18.123 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:03:18.123 EAL: Ask a virtual area of 0x61000 bytes
00:03:18.123 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:03:18.123 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:18.123 EAL: Ask a virtual area of 0x400000000 bytes
00:03:18.123 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:03:18.123 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:03:18.123 EAL: Ask a virtual area of 0x61000 bytes
00:03:18.123 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:03:18.123 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:18.123 EAL: Ask a virtual area of 0x400000000 bytes
00:03:18.123 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:03:18.123 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:03:18.123 EAL: Ask a virtual area of 0x61000 bytes
00:03:18.123 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:03:18.123 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:18.123 EAL: Ask a virtual area of 0x400000000 bytes
00:03:18.123 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:03:18.123 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:03:18.123 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:03:18.123 EAL: Ask a virtual area of 0x61000 bytes
00:03:18.123 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:03:18.123 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:18.123 EAL: Ask a virtual area of 0x400000000 bytes
00:03:18.123 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:03:18.123 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:03:18.123 EAL: Ask a virtual area of 0x61000 bytes
00:03:18.123 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:03:18.123 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:18.123 EAL: Ask a virtual area of 0x400000000 bytes
00:03:18.123 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:03:18.123 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:03:18.123 EAL: Ask a virtual area of 0x61000 bytes
00:03:18.123 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:03:18.123 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:18.123 EAL: Ask a virtual area of 0x400000000 bytes
00:03:18.123 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:03:18.123 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:03:18.123 EAL: Ask a virtual area of 0x61000 bytes
00:03:18.123 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:03:18.123 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:18.123 EAL: Ask a virtual area of 0x400000000 bytes
00:03:18.123 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:03:18.123 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:03:18.123 EAL: Hugepages will be freed exactly as allocated.
00:03:18.123 EAL: No shared files mode enabled, IPC is disabled
00:03:18.123 EAL: No shared files mode enabled, IPC is disabled
00:03:18.123 EAL: TSC frequency is ~2100000 KHz
00:03:18.123 EAL: Main lcore 0 is ready (tid=7fc849aeba00;cpuset=[0])
00:03:18.123 EAL: Trying to obtain current memory policy.
00:03:18.123 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:18.123 EAL: Restoring previous memory policy: 0
00:03:18.123 EAL: request: mp_malloc_sync
00:03:18.123 EAL: No shared files mode enabled, IPC is disabled
00:03:18.123 EAL: Heap on socket 0 was expanded by 2MB
00:03:18.123 EAL: No shared files mode enabled, IPC is disabled
00:03:18.123 EAL: No PCI address specified using 'addr=' in: bus=pci
00:03:18.123 EAL: Mem event callback 'spdk:(nil)' registered
00:03:18.123
00:03:18.123
00:03:18.123 CUnit - A unit testing framework for C - Version 2.1-3
00:03:18.123 http://cunit.sourceforge.net/
00:03:18.123
00:03:18.123
00:03:18.123 Suite: components_suite
00:03:18.123 Test: vtophys_malloc_test ...passed
00:03:18.123 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:03:18.123 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:18.123 EAL: Restoring previous memory policy: 4
00:03:18.123 EAL: Calling mem event callback 'spdk:(nil)'
00:03:18.123 EAL: request: mp_malloc_sync
00:03:18.123 EAL: No shared files mode enabled, IPC is disabled
00:03:18.123 EAL: Heap on socket 0 was expanded by 4MB
00:03:18.123 EAL: Calling mem event callback 'spdk:(nil)'
00:03:18.123 EAL: request: mp_malloc_sync
00:03:18.123 EAL: No shared files mode enabled, IPC is disabled
00:03:18.123 EAL: Heap on socket 0 was shrunk by 4MB
00:03:18.123 EAL: Trying to obtain current memory policy.
00:03:18.123 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:18.123 EAL: Restoring previous memory policy: 4
00:03:18.123 EAL: Calling mem event callback 'spdk:(nil)'
00:03:18.123 EAL: request: mp_malloc_sync
00:03:18.123 EAL: No shared files mode enabled, IPC is disabled
00:03:18.123 EAL: Heap on socket 0 was expanded by 6MB
00:03:18.123 EAL: Calling mem event callback 'spdk:(nil)'
00:03:18.123 EAL: request: mp_malloc_sync
00:03:18.123 EAL: No shared files mode enabled, IPC is disabled
00:03:18.123 EAL: Heap on socket 0 was shrunk by 6MB
00:03:18.123 EAL: Trying to obtain current memory policy.
00:03:18.123 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:18.123 EAL: Restoring previous memory policy: 4
00:03:18.123 EAL: Calling mem event callback 'spdk:(nil)'
00:03:18.123 EAL: request: mp_malloc_sync
00:03:18.123 EAL: No shared files mode enabled, IPC is disabled
00:03:18.123 EAL: Heap on socket 0 was expanded by 10MB
00:03:18.123 EAL: Calling mem event callback 'spdk:(nil)'
00:03:18.123 EAL: request: mp_malloc_sync
00:03:18.123 EAL: No shared files mode enabled, IPC is disabled
00:03:18.123 EAL: Heap on socket 0 was shrunk by 10MB
00:03:18.123 EAL: Trying to obtain current memory policy.
00:03:18.123 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:18.123 EAL: Restoring previous memory policy: 4
00:03:18.123 EAL: Calling mem event callback 'spdk:(nil)'
00:03:18.123 EAL: request: mp_malloc_sync
00:03:18.123 EAL: No shared files mode enabled, IPC is disabled
00:03:18.123 EAL: Heap on socket 0 was expanded by 18MB
00:03:18.123 EAL: Calling mem event callback 'spdk:(nil)'
00:03:18.123 EAL: request: mp_malloc_sync
00:03:18.123 EAL: No shared files mode enabled, IPC is disabled
00:03:18.123 EAL: Heap on socket 0 was shrunk by 18MB
00:03:18.123 EAL: Trying to obtain current memory policy.
00:03:18.123 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:18.123 EAL: Restoring previous memory policy: 4
00:03:18.123 EAL: Calling mem event callback 'spdk:(nil)'
00:03:18.123 EAL: request: mp_malloc_sync
00:03:18.123 EAL: No shared files mode enabled, IPC is disabled
00:03:18.123 EAL: Heap on socket 0 was expanded by 34MB
00:03:18.123 EAL: Calling mem event callback 'spdk:(nil)'
00:03:18.123 EAL: request: mp_malloc_sync
00:03:18.123 EAL: No shared files mode enabled, IPC is disabled
00:03:18.123 EAL: Heap on socket 0 was shrunk by 34MB
00:03:18.123 EAL: Trying to obtain current memory policy.
00:03:18.123 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:18.123 EAL: Restoring previous memory policy: 4
00:03:18.123 EAL: Calling mem event callback 'spdk:(nil)'
00:03:18.123 EAL: request: mp_malloc_sync
00:03:18.123 EAL: No shared files mode enabled, IPC is disabled
00:03:18.123 EAL: Heap on socket 0 was expanded by 66MB
00:03:18.123 EAL: Calling mem event callback 'spdk:(nil)'
00:03:18.123 EAL: request: mp_malloc_sync
00:03:18.123 EAL: No shared files mode enabled, IPC is disabled
00:03:18.123 EAL: Heap on socket 0 was shrunk by 66MB
00:03:18.123 EAL: Trying to obtain current memory policy.
00:03:18.123 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:18.123 EAL: Restoring previous memory policy: 4
00:03:18.123 EAL: Calling mem event callback 'spdk:(nil)'
00:03:18.123 EAL: request: mp_malloc_sync
00:03:18.123 EAL: No shared files mode enabled, IPC is disabled
00:03:18.123 EAL: Heap on socket 0 was expanded by 130MB
00:03:18.123 EAL: Calling mem event callback 'spdk:(nil)'
00:03:18.123 EAL: request: mp_malloc_sync
00:03:18.123 EAL: No shared files mode enabled, IPC is disabled
00:03:18.123 EAL: Heap on socket 0 was shrunk by 130MB
00:03:18.123 EAL: Trying to obtain current memory policy.
00:03:18.123 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:18.123 EAL: Restoring previous memory policy: 4
00:03:18.123 EAL: Calling mem event callback 'spdk:(nil)'
00:03:18.123 EAL: request: mp_malloc_sync
00:03:18.123 EAL: No shared files mode enabled, IPC is disabled
00:03:18.123 EAL: Heap on socket 0 was expanded by 258MB
00:03:18.123 EAL: Calling mem event callback 'spdk:(nil)'
00:03:18.382 EAL: request: mp_malloc_sync
00:03:18.382 EAL: No shared files mode enabled, IPC is disabled
00:03:18.382 EAL: Heap on socket 0 was shrunk by 258MB
00:03:18.382 EAL: Trying to obtain current memory policy.
00:03:18.382 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:18.382 EAL: Restoring previous memory policy: 4
00:03:18.382 EAL: Calling mem event callback 'spdk:(nil)'
00:03:18.382 EAL: request: mp_malloc_sync
00:03:18.382 EAL: No shared files mode enabled, IPC is disabled
00:03:18.382 EAL: Heap on socket 0 was expanded by 514MB
00:03:18.382 EAL: Calling mem event callback 'spdk:(nil)'
00:03:18.641 EAL: request: mp_malloc_sync
00:03:18.641 EAL: No shared files mode enabled, IPC is disabled
00:03:18.641 EAL: Heap on socket 0 was shrunk by 514MB
00:03:18.641 EAL: Trying to obtain current memory policy.
00:03:18.641 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:18.641 EAL: Restoring previous memory policy: 4
00:03:18.641 EAL: Calling mem event callback 'spdk:(nil)'
00:03:18.641 EAL: request: mp_malloc_sync
00:03:18.641 EAL: No shared files mode enabled, IPC is disabled
00:03:18.641 EAL: Heap on socket 0 was expanded by 1026MB
00:03:18.900 EAL: Calling mem event callback 'spdk:(nil)'
00:03:19.159 EAL: request: mp_malloc_sync
00:03:19.159 EAL: No shared files mode enabled, IPC is disabled
00:03:19.159 EAL: Heap on socket 0 was shrunk by 1026MB
00:03:19.159 passed
00:03:19.159
00:03:19.159 Run Summary: Type Total Ran Passed Failed Inactive
00:03:19.159 suites 1 1 n/a 0 0
00:03:19.159 tests 2 2 2 0 0
00:03:19.159 asserts 497 497 497 0 n/a
00:03:19.159
00:03:19.159 Elapsed time = 0.969 seconds
00:03:19.159 EAL: Calling mem event callback 'spdk:(nil)'
00:03:19.159 EAL: request: mp_malloc_sync
00:03:19.159 EAL: No shared files mode enabled, IPC is disabled
00:03:19.159 EAL: Heap on socket 0 was shrunk by 2MB
00:03:19.159 EAL: No shared files mode enabled, IPC is disabled
00:03:19.159 EAL: No shared files mode enabled, IPC is disabled
00:03:19.159 EAL: No shared files mode enabled, IPC is disabled
00:03:19.159
00:03:19.159 real 0m1.105s
00:03:19.159 user 0m0.646s
00:03:19.159 sys 0m0.429s
00:03:19.159 18:40:41 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:19.159 18:40:41 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:03:19.159 ************************************
00:03:19.159 END TEST env_vtophys
00:03:19.159 ************************************
00:03:19.159 18:40:41 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:03:19.159 18:40:41 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:19.159 18:40:41 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:19.159 18:40:41 env -- common/autotest_common.sh@10 -- # set +x
************************************
00:03:19.159 START TEST env_pci
00:03:19.159 ************************************
00:03:19.159 18:40:41 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:03:19.159
00:03:19.159
00:03:19.159 CUnit - A unit testing framework for C - Version 2.1-3
00:03:19.159 http://cunit.sourceforge.net/
00:03:19.159
00:03:19.159
00:03:19.159 Suite: pci
00:03:19.159 Test: pci_hook ...[2024-11-20 18:40:41.329405] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3445669 has claimed it
00:03:19.159 EAL: Cannot find device (10000:00:01.0)
00:03:19.159 EAL: Failed to attach device on primary process
00:03:19.159 passed
00:03:19.159
00:03:19.160 Run Summary: Type Total Ran Passed Failed Inactive
00:03:19.160 suites 1 1 n/a 0 0
00:03:19.160 tests 1 1 1 0 0
00:03:19.160 asserts 25 25 25 0 n/a
00:03:19.160
00:03:19.160 Elapsed time = 0.027 seconds
00:03:19.160
00:03:19.160 real 0m0.047s
00:03:19.160 user 0m0.014s
00:03:19.160 sys 0m0.033s
00:03:19.160 18:40:41 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:19.160 18:40:41 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:03:19.160 ************************************
00:03:19.160 END TEST env_pci
00:03:19.160 ************************************
00:03:19.160 18:40:41 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:03:19.160 18:40:41 env -- env/env.sh@15 -- # uname
00:03:19.160 18:40:41 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:03:19.160 18:40:41 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:03:19.160 18:40:41 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:19.160 18:40:41 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:03:19.160 18:40:41 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:19.160 18:40:41 env -- common/autotest_common.sh@10 -- # set +x
00:03:19.160 ************************************
00:03:19.160 START TEST env_dpdk_post_init
00:03:19.160 ************************************
00:03:19.160 18:40:41 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:19.160 EAL: Detected CPU lcores: 96
00:03:19.160 EAL: Detected NUMA nodes: 2
00:03:19.160 EAL: Detected shared linkage of DPDK
00:03:19.160 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:19.418 EAL: Selected IOVA mode 'VA'
00:03:19.418 EAL: VFIO support initialized
00:03:19.418 TELEMETRY: No legacy callbacks, legacy socket not created
00:03:19.418 EAL: Using IOMMU type 1 (Type 1)
00:03:19.418 EAL: Ignore mapping IO port bar(1)
00:03:19.418 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:03:19.418 EAL: Ignore mapping IO port bar(1)
00:03:19.418 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:03:19.418 EAL: Ignore mapping IO port bar(1)
00:03:19.418 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:03:19.418 EAL: Ignore mapping IO port bar(1)
00:03:19.418 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:03:19.418 EAL: Ignore mapping IO port bar(1)
00:03:19.418 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:03:19.418 EAL: Ignore mapping IO port bar(1)
00:03:19.418 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:03:19.418 EAL: Ignore mapping IO port bar(1)
00:03:19.418 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:03:19.419 EAL: Ignore mapping IO port bar(1)
00:03:19.419 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:03:20.355 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:03:20.355 EAL: Ignore mapping IO port bar(1)
00:03:20.355 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:03:20.355 EAL: Ignore mapping IO port bar(1)
00:03:20.355 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:03:20.355 EAL: Ignore mapping IO port bar(1)
00:03:20.355 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:03:20.355 EAL: Ignore mapping IO port bar(1)
00:03:20.355 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:03:20.355 EAL: Ignore mapping IO port bar(1)
00:03:20.355 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:03:20.355 EAL: Ignore mapping IO port bar(1)
00:03:20.355 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:03:20.355 EAL: Ignore mapping IO port bar(1)
00:03:20.355 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:03:20.355 EAL: Ignore mapping IO port bar(1)
00:03:20.355 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:03:24.579 EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:03:24.579 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
00:03:24.579 Starting DPDK initialization...
00:03:24.579 Starting SPDK post initialization...
00:03:24.579 SPDK NVMe probe
00:03:24.579 Attaching to 0000:5e:00.0
00:03:24.579 Attached to 0000:5e:00.0
00:03:24.579 Cleaning up...
00:03:24.579
00:03:24.579 real 0m4.951s
00:03:24.579 user 0m3.510s
00:03:24.579 sys 0m0.513s
00:03:24.579 18:40:46 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:24.579 18:40:46 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:03:24.579 ************************************
00:03:24.579 END TEST env_dpdk_post_init
00:03:24.579 ************************************
00:03:24.579 18:40:46 env -- env/env.sh@26 -- # uname
00:03:24.579 18:40:46 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:03:24.579 18:40:46 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:03:24.579 18:40:46 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:24.579 18:40:46 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:24.579 18:40:46 env -- common/autotest_common.sh@10 -- # set +x
00:03:24.579 ************************************
00:03:24.579 START TEST env_mem_callbacks
00:03:24.579 ************************************
00:03:24.579 18:40:46 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:03:24.579 EAL: Detected CPU lcores: 96
00:03:24.579 EAL: Detected NUMA nodes: 2
00:03:24.579 EAL: Detected shared linkage of DPDK
00:03:24.579 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:24.579 EAL: Selected IOVA mode 'VA'
00:03:24.579 EAL: VFIO support initialized
00:03:24.579 TELEMETRY: No legacy callbacks, legacy socket not created
00:03:24.579
00:03:24.579
00:03:24.579 CUnit - A unit testing framework for C - Version 2.1-3
00:03:24.579 http://cunit.sourceforge.net/
00:03:24.579
00:03:24.579
00:03:24.579 Suite: memory
00:03:24.579 Test: test ...
00:03:24.579 register 0x200000200000 2097152
00:03:24.579 malloc 3145728
00:03:24.579 register 0x200000400000 4194304
00:03:24.579 buf 0x200000500000 len 3145728 PASSED
00:03:24.579 malloc 64
00:03:24.579 buf 0x2000004fff40 len 64 PASSED
00:03:24.579 malloc 4194304
00:03:24.579 register 0x200000800000 6291456
00:03:24.579 buf 0x200000a00000 len 4194304 PASSED
00:03:24.579 free 0x200000500000 3145728
00:03:24.579 free 0x2000004fff40 64
00:03:24.579 unregister 0x200000400000 4194304 PASSED
00:03:24.579 free 0x200000a00000 4194304
00:03:24.579 unregister 0x200000800000 6291456 PASSED
00:03:24.579 malloc 8388608
00:03:24.579 register 0x200000400000 10485760
00:03:24.579 buf 0x200000600000 len 8388608 PASSED
00:03:24.579 free 0x200000600000 8388608
00:03:24.579 unregister 0x200000400000 10485760 PASSED
00:03:24.579 passed
00:03:24.579
00:03:24.579 Run Summary: Type Total Ran Passed Failed Inactive
00:03:24.579 suites 1 1 n/a 0 0
00:03:24.579 tests 1 1 1 0 0
00:03:24.579 asserts 15 15 15 0 n/a
00:03:24.579
00:03:24.579 Elapsed time = 0.008 seconds
00:03:24.579
00:03:24.579 real 0m0.058s
00:03:24.579 user 0m0.016s
00:03:24.579 sys 0m0.041s
00:03:24.579 18:40:46 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:24.579 18:40:46 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:03:24.579 ************************************
00:03:24.579 END TEST env_mem_callbacks
00:03:24.579 ************************************
00:03:24.579
00:03:24.579 real 0m6.850s
00:03:24.579 user 0m4.570s
00:03:24.579 sys 0m1.357s
00:03:24.579 18:40:46 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:24.579 18:40:46 env -- common/autotest_common.sh@10 -- # set +x
00:03:24.579 ************************************
00:03:24.579 END TEST env
00:03:24.579 ************************************
00:03:24.579 18:40:46 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:03:24.579 18:40:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:24.579 18:40:46 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:24.579 18:40:46 -- common/autotest_common.sh@10 -- # set +x
00:03:24.579 ************************************
00:03:24.579 START TEST rpc
00:03:24.579 ************************************
00:03:24.579 18:40:46 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:03:24.579 * Looking for test storage...
00:03:24.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:24.579 18:40:46 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:03:24.579 18:40:46 rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:03:24.579 18:40:46 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:03:24.579 18:40:46 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:03:24.579 18:40:46 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:24.579 18:40:46 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:24.579 18:40:46 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:24.579 18:40:46 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:03:24.579 18:40:46 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:03:24.579 18:40:46 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:03:24.579 18:40:46 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:03:24.579 18:40:46 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:03:24.579 18:40:46 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:03:24.579 18:40:46 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:03:24.579 18:40:46 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:24.579 18:40:46 rpc -- scripts/common.sh@344 -- # case "$op" in
00:03:24.579 18:40:46 rpc -- scripts/common.sh@345 -- # : 1
00:03:24.580 18:40:46 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:24.580 18:40:46 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:24.580 18:40:46 rpc -- scripts/common.sh@365 -- # decimal 1
00:03:24.580 18:40:46 rpc -- scripts/common.sh@353 -- # local d=1
00:03:24.580 18:40:46 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:24.580 18:40:46 rpc -- scripts/common.sh@355 -- # echo 1
00:03:24.580 18:40:46 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:03:24.580 18:40:46 rpc -- scripts/common.sh@366 -- # decimal 2
00:03:24.580 18:40:46 rpc -- scripts/common.sh@353 -- # local d=2
00:03:24.580 18:40:46 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:24.580 18:40:46 rpc -- scripts/common.sh@355 -- # echo 2
00:03:24.580 18:40:46 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:03:24.580 18:40:46 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:24.580 18:40:46 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:24.580 18:40:46 rpc -- scripts/common.sh@368 -- # return 0
00:03:24.580 18:40:46 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:24.580 18:40:46 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:03:24.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:24.580 --rc genhtml_branch_coverage=1
00:03:24.580 --rc genhtml_function_coverage=1
00:03:24.580 --rc genhtml_legend=1
00:03:24.580 --rc geninfo_all_blocks=1
00:03:24.580 --rc geninfo_unexecuted_blocks=1
00:03:24.580
00:03:24.580 '
00:03:24.580 18:40:46 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:03:24.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:24.580 --rc genhtml_branch_coverage=1
00:03:24.580 --rc genhtml_function_coverage=1
00:03:24.580 --rc genhtml_legend=1
00:03:24.580 --rc geninfo_all_blocks=1
00:03:24.580 --rc geninfo_unexecuted_blocks=1
00:03:24.580
00:03:24.580 '
00:03:24.580 18:40:46 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:03:24.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:24.580 --rc genhtml_branch_coverage=1
00:03:24.580 --rc genhtml_function_coverage=1
00:03:24.580 --rc genhtml_legend=1
00:03:24.580 --rc geninfo_all_blocks=1
00:03:24.580 --rc geninfo_unexecuted_blocks=1
00:03:24.580
00:03:24.580 '
00:03:24.580 18:40:46 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:03:24.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:24.580 --rc genhtml_branch_coverage=1
00:03:24.580 --rc genhtml_function_coverage=1
00:03:24.580 --rc genhtml_legend=1
00:03:24.580 --rc geninfo_all_blocks=1
00:03:24.580 --rc geninfo_unexecuted_blocks=1
00:03:24.580
00:03:24.580 '
00:03:24.580 18:40:46 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3446719
00:03:24.580 18:40:46 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:03:24.580 18:40:46 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:03:24.580 18:40:46 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3446719
00:03:24.580 18:40:46 rpc -- common/autotest_common.sh@835 -- # '[' -z 3446719 ']'
00:03:24.580 18:40:46 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:24.580 18:40:46 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:03:24.580 18:40:46 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:03:24.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:03:24.580 18:40:46 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:03:24.580 18:40:46 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:24.580 [2024-11-20 18:40:46.851294] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization...
00:03:24.580 [2024-11-20 18:40:46.851344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3446719 ]
00:03:24.840 [2024-11-20 18:40:46.925431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:24.840 [2024-11-20 18:40:46.968283] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:03:24.840 [2024-11-20 18:40:46.968317] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3446719' to capture a snapshot of events at runtime.
00:03:24.840 [2024-11-20 18:40:46.968327] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:03:24.840 [2024-11-20 18:40:46.968333] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:03:24.840 [2024-11-20 18:40:46.968337] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3446719 for offline analysis/debug.
00:03:24.840 [2024-11-20 18:40:46.968940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:03:25.100 18:40:47 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:03:25.100 18:40:47 rpc -- common/autotest_common.sh@868 -- # return 0
00:03:25.100 18:40:47 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:25.100 18:40:47 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:25.100 18:40:47 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:03:25.100 18:40:47 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:03:25.100 18:40:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:25.100 18:40:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:25.100 18:40:47 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:25.100 ************************************
00:03:25.100 START TEST rpc_integrity
00:03:25.100 ************************************
00:03:25.100 18:40:47 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:03:25.100 18:40:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:03:25.100 18:40:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:25.100 18:40:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:25.100 18:40:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:25.100 18:40:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:03:25.100 18:40:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:03:25.100 18:40:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:03:25.100 18:40:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:03:25.100 18:40:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:25.100 18:40:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:25.100 18:40:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:25.100 18:40:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:03:25.100 18:40:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:03:25.100 18:40:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:25.100 18:40:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:25.100 18:40:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:25.100 18:40:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:03:25.100 {
00:03:25.100 "name": "Malloc0",
00:03:25.100 "aliases": [
00:03:25.100 "8ecd8469-48af-454e-8ddc-59c8244e0766"
00:03:25.100 ],
00:03:25.100 "product_name": "Malloc disk",
00:03:25.100 "block_size": 512,
00:03:25.100 "num_blocks": 16384,
00:03:25.100 "uuid": "8ecd8469-48af-454e-8ddc-59c8244e0766",
00:03:25.100 "assigned_rate_limits": {
00:03:25.100 "rw_ios_per_sec": 0,
00:03:25.100 "rw_mbytes_per_sec": 0,
00:03:25.100 "r_mbytes_per_sec": 0,
00:03:25.100 "w_mbytes_per_sec": 0
00:03:25.100 },
00:03:25.100 "claimed": false,
00:03:25.100 "zoned": false,
00:03:25.100 "supported_io_types": {
00:03:25.100 "read": true,
00:03:25.100 "write": true,
00:03:25.100 "unmap": true,
00:03:25.100 "flush": true,
00:03:25.100 "reset": true,
00:03:25.100 "nvme_admin": false,
00:03:25.100 "nvme_io": false,
00:03:25.100 "nvme_io_md": false,
00:03:25.100 "write_zeroes": true,
00:03:25.100 "zcopy": true,
00:03:25.100 "get_zone_info": false,
00:03:25.100 "zone_management": false,
00:03:25.100 "zone_append": false,
00:03:25.100 "compare": false,
00:03:25.100 "compare_and_write": false,
00:03:25.100 "abort": true,
00:03:25.100 "seek_hole": false,
00:03:25.100 "seek_data": false,
00:03:25.100 "copy": true,
00:03:25.100 "nvme_iov_md": false
00:03:25.100 },
00:03:25.100 "memory_domains": [
00:03:25.100 {
00:03:25.100 "dma_device_id": "system",
00:03:25.100 "dma_device_type": 1
00:03:25.100 },
00:03:25.100 {
00:03:25.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:25.100 "dma_device_type": 2
00:03:25.100 }
00:03:25.100 ],
00:03:25.100 "driver_specific": {}
00:03:25.100 }
00:03:25.100 ]'
00:03:25.100 18:40:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:03:25.100 18:40:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:03:25.100 18:40:47 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:03:25.100 18:40:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:25.100 18:40:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:25.100 [2024-11-20 18:40:47.362932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:03:25.100 [2024-11-20 18:40:47.362962] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:03:25.100 [2024-11-20 18:40:47.362974] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1bfc280
00:03:25.100 [2024-11-20 18:40:47.362981] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:03:25.100 [2024-11-20 18:40:47.364067] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:03:25.100 [2024-11-20 18:40:47.364086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:03:25.100 Passthru0
00:03:25.100 18:40:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:25.100 18:40:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:03:25.100 18:40:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:25.100 18:40:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:25.100 18:40:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:25.100 18:40:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:03:25.100 {
00:03:25.100 "name": "Malloc0",
00:03:25.100 "aliases": [
00:03:25.100 "8ecd8469-48af-454e-8ddc-59c8244e0766"
00:03:25.100 ],
00:03:25.100 "product_name": "Malloc disk",
00:03:25.100 "block_size": 512,
00:03:25.100 "num_blocks": 16384,
00:03:25.100 "uuid": "8ecd8469-48af-454e-8ddc-59c8244e0766",
00:03:25.100 "assigned_rate_limits": {
00:03:25.100 "rw_ios_per_sec": 0,
00:03:25.100 "rw_mbytes_per_sec": 0,
00:03:25.100 "r_mbytes_per_sec": 0,
00:03:25.100 "w_mbytes_per_sec": 0
00:03:25.100 },
00:03:25.100 "claimed": true,
00:03:25.100 "claim_type": "exclusive_write",
00:03:25.100 "zoned": false,
00:03:25.100 "supported_io_types": {
00:03:25.100 "read": true,
00:03:25.100 "write": true,
00:03:25.100 "unmap": true,
00:03:25.100 "flush": true,
00:03:25.100 "reset": true,
00:03:25.100 "nvme_admin": false,
00:03:25.100 "nvme_io": false,
00:03:25.100 "nvme_io_md": false,
00:03:25.100 "write_zeroes": true,
00:03:25.100 "zcopy": true,
00:03:25.100 "get_zone_info": false,
00:03:25.100 "zone_management": false,
00:03:25.100 "zone_append": false,
00:03:25.100 "compare": false,
00:03:25.100 "compare_and_write": false,
00:03:25.100 "abort": true,
00:03:25.100 "seek_hole": false,
00:03:25.100 "seek_data": false,
00:03:25.100 "copy": true,
00:03:25.100 "nvme_iov_md": false
00:03:25.100 },
00:03:25.100 "memory_domains": [
00:03:25.100 {
00:03:25.100 "dma_device_id": "system",
00:03:25.100 "dma_device_type": 1
00:03:25.100 },
00:03:25.100 {
00:03:25.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:25.100 "dma_device_type": 2
00:03:25.100 }
00:03:25.100 ],
00:03:25.100 "driver_specific": {}
00:03:25.100 },
00:03:25.100 {
00:03:25.100 "name": "Passthru0", 00:03:25.100 "aliases": [ 00:03:25.100 "c18ae78c-a677-5c71-936d-7eef9046416f" 00:03:25.100 ], 00:03:25.100 "product_name": "passthru", 00:03:25.101 "block_size": 512, 00:03:25.101 "num_blocks": 16384, 00:03:25.101 "uuid": "c18ae78c-a677-5c71-936d-7eef9046416f", 00:03:25.101 "assigned_rate_limits": { 00:03:25.101 "rw_ios_per_sec": 0, 00:03:25.101 "rw_mbytes_per_sec": 0, 00:03:25.101 "r_mbytes_per_sec": 0, 00:03:25.101 "w_mbytes_per_sec": 0 00:03:25.101 }, 00:03:25.101 "claimed": false, 00:03:25.101 "zoned": false, 00:03:25.101 "supported_io_types": { 00:03:25.101 "read": true, 00:03:25.101 "write": true, 00:03:25.101 "unmap": true, 00:03:25.101 "flush": true, 00:03:25.101 "reset": true, 00:03:25.101 "nvme_admin": false, 00:03:25.101 "nvme_io": false, 00:03:25.101 "nvme_io_md": false, 00:03:25.101 "write_zeroes": true, 00:03:25.101 "zcopy": true, 00:03:25.101 "get_zone_info": false, 00:03:25.101 "zone_management": false, 00:03:25.101 "zone_append": false, 00:03:25.101 "compare": false, 00:03:25.101 "compare_and_write": false, 00:03:25.101 "abort": true, 00:03:25.101 "seek_hole": false, 00:03:25.101 "seek_data": false, 00:03:25.101 "copy": true, 00:03:25.101 "nvme_iov_md": false 00:03:25.101 }, 00:03:25.101 "memory_domains": [ 00:03:25.101 { 00:03:25.101 "dma_device_id": "system", 00:03:25.101 "dma_device_type": 1 00:03:25.101 }, 00:03:25.101 { 00:03:25.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:25.101 "dma_device_type": 2 00:03:25.101 } 00:03:25.101 ], 00:03:25.101 "driver_specific": { 00:03:25.101 "passthru": { 00:03:25.101 "name": "Passthru0", 00:03:25.101 "base_bdev_name": "Malloc0" 00:03:25.101 } 00:03:25.101 } 00:03:25.101 } 00:03:25.101 ]' 00:03:25.101 18:40:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:25.360 18:40:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:25.360 18:40:47 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:25.360 18:40:47 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:25.360 18:40:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:25.360 18:40:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:25.360 18:40:47 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:25.360 18:40:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:25.360 18:40:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:25.360 18:40:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:25.360 18:40:47 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:25.360 18:40:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:25.360 18:40:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:25.360 18:40:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:25.360 18:40:47 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:25.360 18:40:47 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:25.360 18:40:47 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:25.360 00:03:25.360 real 0m0.274s 00:03:25.360 user 0m0.174s 00:03:25.360 sys 0m0.036s 00:03:25.360 18:40:47 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:25.360 18:40:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:25.360 ************************************ 00:03:25.360 END TEST rpc_integrity 00:03:25.360 ************************************ 00:03:25.360 18:40:47 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:25.360 18:40:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:25.360 18:40:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:25.360 18:40:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:25.360 ************************************ 00:03:25.360 START TEST rpc_plugins 
00:03:25.360 ************************************ 00:03:25.360 18:40:47 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:25.360 18:40:47 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:25.360 18:40:47 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:25.360 18:40:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:25.360 18:40:47 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:25.360 18:40:47 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:25.360 18:40:47 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:25.360 18:40:47 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:25.360 18:40:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:25.360 18:40:47 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:25.360 18:40:47 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:25.360 { 00:03:25.360 "name": "Malloc1", 00:03:25.360 "aliases": [ 00:03:25.360 "e6de2ab3-984a-4c12-8224-86693bad013c" 00:03:25.360 ], 00:03:25.360 "product_name": "Malloc disk", 00:03:25.360 "block_size": 4096, 00:03:25.360 "num_blocks": 256, 00:03:25.360 "uuid": "e6de2ab3-984a-4c12-8224-86693bad013c", 00:03:25.360 "assigned_rate_limits": { 00:03:25.360 "rw_ios_per_sec": 0, 00:03:25.360 "rw_mbytes_per_sec": 0, 00:03:25.360 "r_mbytes_per_sec": 0, 00:03:25.360 "w_mbytes_per_sec": 0 00:03:25.360 }, 00:03:25.360 "claimed": false, 00:03:25.360 "zoned": false, 00:03:25.360 "supported_io_types": { 00:03:25.360 "read": true, 00:03:25.360 "write": true, 00:03:25.360 "unmap": true, 00:03:25.360 "flush": true, 00:03:25.360 "reset": true, 00:03:25.360 "nvme_admin": false, 00:03:25.360 "nvme_io": false, 00:03:25.360 "nvme_io_md": false, 00:03:25.360 "write_zeroes": true, 00:03:25.360 "zcopy": true, 00:03:25.360 "get_zone_info": false, 00:03:25.360 "zone_management": false, 00:03:25.360 
"zone_append": false, 00:03:25.360 "compare": false, 00:03:25.360 "compare_and_write": false, 00:03:25.360 "abort": true, 00:03:25.360 "seek_hole": false, 00:03:25.360 "seek_data": false, 00:03:25.360 "copy": true, 00:03:25.360 "nvme_iov_md": false 00:03:25.360 }, 00:03:25.360 "memory_domains": [ 00:03:25.360 { 00:03:25.360 "dma_device_id": "system", 00:03:25.360 "dma_device_type": 1 00:03:25.360 }, 00:03:25.360 { 00:03:25.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:25.360 "dma_device_type": 2 00:03:25.360 } 00:03:25.360 ], 00:03:25.360 "driver_specific": {} 00:03:25.360 } 00:03:25.360 ]' 00:03:25.360 18:40:47 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:25.360 18:40:47 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:25.360 18:40:47 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:25.360 18:40:47 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:25.360 18:40:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:25.360 18:40:47 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:25.360 18:40:47 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:25.360 18:40:47 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:25.360 18:40:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:25.360 18:40:47 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:25.360 18:40:47 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:25.360 18:40:47 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:25.619 18:40:47 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:25.619 00:03:25.619 real 0m0.141s 00:03:25.619 user 0m0.087s 00:03:25.619 sys 0m0.019s 00:03:25.619 18:40:47 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:25.619 18:40:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:25.619 ************************************ 
00:03:25.619 END TEST rpc_plugins 00:03:25.619 ************************************ 00:03:25.619 18:40:47 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:25.619 18:40:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:25.619 18:40:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:25.619 18:40:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:25.619 ************************************ 00:03:25.619 START TEST rpc_trace_cmd_test 00:03:25.619 ************************************ 00:03:25.619 18:40:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:25.619 18:40:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:25.619 18:40:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:25.619 18:40:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:25.619 18:40:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:25.619 18:40:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:25.619 18:40:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:25.619 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3446719", 00:03:25.619 "tpoint_group_mask": "0x8", 00:03:25.619 "iscsi_conn": { 00:03:25.619 "mask": "0x2", 00:03:25.619 "tpoint_mask": "0x0" 00:03:25.619 }, 00:03:25.619 "scsi": { 00:03:25.619 "mask": "0x4", 00:03:25.619 "tpoint_mask": "0x0" 00:03:25.619 }, 00:03:25.619 "bdev": { 00:03:25.619 "mask": "0x8", 00:03:25.619 "tpoint_mask": "0xffffffffffffffff" 00:03:25.619 }, 00:03:25.619 "nvmf_rdma": { 00:03:25.619 "mask": "0x10", 00:03:25.619 "tpoint_mask": "0x0" 00:03:25.619 }, 00:03:25.619 "nvmf_tcp": { 00:03:25.619 "mask": "0x20", 00:03:25.619 "tpoint_mask": "0x0" 00:03:25.619 }, 00:03:25.619 "ftl": { 00:03:25.619 "mask": "0x40", 00:03:25.619 "tpoint_mask": "0x0" 00:03:25.619 }, 00:03:25.619 "blobfs": { 00:03:25.619 "mask": "0x80", 00:03:25.619 
"tpoint_mask": "0x0" 00:03:25.619 }, 00:03:25.619 "dsa": { 00:03:25.619 "mask": "0x200", 00:03:25.619 "tpoint_mask": "0x0" 00:03:25.619 }, 00:03:25.619 "thread": { 00:03:25.619 "mask": "0x400", 00:03:25.619 "tpoint_mask": "0x0" 00:03:25.619 }, 00:03:25.619 "nvme_pcie": { 00:03:25.619 "mask": "0x800", 00:03:25.619 "tpoint_mask": "0x0" 00:03:25.619 }, 00:03:25.619 "iaa": { 00:03:25.619 "mask": "0x1000", 00:03:25.619 "tpoint_mask": "0x0" 00:03:25.619 }, 00:03:25.619 "nvme_tcp": { 00:03:25.619 "mask": "0x2000", 00:03:25.619 "tpoint_mask": "0x0" 00:03:25.619 }, 00:03:25.619 "bdev_nvme": { 00:03:25.619 "mask": "0x4000", 00:03:25.619 "tpoint_mask": "0x0" 00:03:25.619 }, 00:03:25.619 "sock": { 00:03:25.619 "mask": "0x8000", 00:03:25.619 "tpoint_mask": "0x0" 00:03:25.619 }, 00:03:25.619 "blob": { 00:03:25.619 "mask": "0x10000", 00:03:25.619 "tpoint_mask": "0x0" 00:03:25.619 }, 00:03:25.619 "bdev_raid": { 00:03:25.619 "mask": "0x20000", 00:03:25.619 "tpoint_mask": "0x0" 00:03:25.619 }, 00:03:25.619 "scheduler": { 00:03:25.619 "mask": "0x40000", 00:03:25.619 "tpoint_mask": "0x0" 00:03:25.619 } 00:03:25.619 }' 00:03:25.619 18:40:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:25.619 18:40:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:25.619 18:40:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:25.619 18:40:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:25.619 18:40:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:25.619 18:40:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:25.619 18:40:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:25.878 18:40:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:25.878 18:40:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:25.878 18:40:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:03:25.878 00:03:25.878 real 0m0.222s 00:03:25.878 user 0m0.194s 00:03:25.878 sys 0m0.022s 00:03:25.878 18:40:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:25.878 18:40:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:25.878 ************************************ 00:03:25.878 END TEST rpc_trace_cmd_test 00:03:25.878 ************************************ 00:03:25.878 18:40:48 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:25.878 18:40:48 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:25.878 18:40:48 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:25.878 18:40:48 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:25.878 18:40:48 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:25.878 18:40:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:25.878 ************************************ 00:03:25.878 START TEST rpc_daemon_integrity 00:03:25.878 ************************************ 00:03:25.878 18:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:25.878 18:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:25.878 18:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:25.878 18:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:25.878 18:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:25.878 18:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:25.878 18:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:25.878 18:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:25.878 18:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:25.878 18:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:25.878 18:40:48 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:03:25.878 18:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:25.878 18:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:25.878 18:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:25.878 18:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:25.878 18:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:25.878 18:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:25.878 18:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:25.878 { 00:03:25.878 "name": "Malloc2", 00:03:25.878 "aliases": [ 00:03:25.878 "9c675ee1-7e3d-46c2-a1a9-816f8286864f" 00:03:25.878 ], 00:03:25.878 "product_name": "Malloc disk", 00:03:25.878 "block_size": 512, 00:03:25.878 "num_blocks": 16384, 00:03:25.879 "uuid": "9c675ee1-7e3d-46c2-a1a9-816f8286864f", 00:03:25.879 "assigned_rate_limits": { 00:03:25.879 "rw_ios_per_sec": 0, 00:03:25.879 "rw_mbytes_per_sec": 0, 00:03:25.879 "r_mbytes_per_sec": 0, 00:03:25.879 "w_mbytes_per_sec": 0 00:03:25.879 }, 00:03:25.879 "claimed": false, 00:03:25.879 "zoned": false, 00:03:25.879 "supported_io_types": { 00:03:25.879 "read": true, 00:03:25.879 "write": true, 00:03:25.879 "unmap": true, 00:03:25.879 "flush": true, 00:03:25.879 "reset": true, 00:03:25.879 "nvme_admin": false, 00:03:25.879 "nvme_io": false, 00:03:25.879 "nvme_io_md": false, 00:03:25.879 "write_zeroes": true, 00:03:25.879 "zcopy": true, 00:03:25.879 "get_zone_info": false, 00:03:25.879 "zone_management": false, 00:03:25.879 "zone_append": false, 00:03:25.879 "compare": false, 00:03:25.879 "compare_and_write": false, 00:03:25.879 "abort": true, 00:03:25.879 "seek_hole": false, 00:03:25.879 "seek_data": false, 00:03:25.879 "copy": true, 00:03:25.879 "nvme_iov_md": false 00:03:25.879 }, 00:03:25.879 "memory_domains": [ 00:03:25.879 { 
00:03:25.879 "dma_device_id": "system", 00:03:25.879 "dma_device_type": 1 00:03:25.879 }, 00:03:25.879 { 00:03:25.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:25.879 "dma_device_type": 2 00:03:25.879 } 00:03:25.879 ], 00:03:25.879 "driver_specific": {} 00:03:25.879 } 00:03:25.879 ]' 00:03:25.879 18:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:25.879 18:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:25.879 18:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:25.879 18:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:25.879 18:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:26.138 [2024-11-20 18:40:48.205243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:26.138 [2024-11-20 18:40:48.205270] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:26.138 [2024-11-20 18:40:48.205282] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1bfe150 00:03:26.138 [2024-11-20 18:40:48.205288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:26.138 [2024-11-20 18:40:48.206257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:26.138 [2024-11-20 18:40:48.206276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:26.138 Passthru0 00:03:26.138 18:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:26.138 18:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:26.138 18:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:26.138 18:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:26.138 18:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:03:26.138 18:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:26.138 { 00:03:26.138 "name": "Malloc2", 00:03:26.138 "aliases": [ 00:03:26.138 "9c675ee1-7e3d-46c2-a1a9-816f8286864f" 00:03:26.138 ], 00:03:26.138 "product_name": "Malloc disk", 00:03:26.138 "block_size": 512, 00:03:26.138 "num_blocks": 16384, 00:03:26.138 "uuid": "9c675ee1-7e3d-46c2-a1a9-816f8286864f", 00:03:26.138 "assigned_rate_limits": { 00:03:26.138 "rw_ios_per_sec": 0, 00:03:26.138 "rw_mbytes_per_sec": 0, 00:03:26.138 "r_mbytes_per_sec": 0, 00:03:26.138 "w_mbytes_per_sec": 0 00:03:26.138 }, 00:03:26.138 "claimed": true, 00:03:26.138 "claim_type": "exclusive_write", 00:03:26.138 "zoned": false, 00:03:26.138 "supported_io_types": { 00:03:26.138 "read": true, 00:03:26.138 "write": true, 00:03:26.138 "unmap": true, 00:03:26.138 "flush": true, 00:03:26.138 "reset": true, 00:03:26.138 "nvme_admin": false, 00:03:26.138 "nvme_io": false, 00:03:26.138 "nvme_io_md": false, 00:03:26.138 "write_zeroes": true, 00:03:26.138 "zcopy": true, 00:03:26.138 "get_zone_info": false, 00:03:26.138 "zone_management": false, 00:03:26.138 "zone_append": false, 00:03:26.138 "compare": false, 00:03:26.138 "compare_and_write": false, 00:03:26.138 "abort": true, 00:03:26.138 "seek_hole": false, 00:03:26.138 "seek_data": false, 00:03:26.138 "copy": true, 00:03:26.138 "nvme_iov_md": false 00:03:26.138 }, 00:03:26.138 "memory_domains": [ 00:03:26.138 { 00:03:26.138 "dma_device_id": "system", 00:03:26.138 "dma_device_type": 1 00:03:26.138 }, 00:03:26.138 { 00:03:26.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:26.138 "dma_device_type": 2 00:03:26.138 } 00:03:26.138 ], 00:03:26.138 "driver_specific": {} 00:03:26.138 }, 00:03:26.138 { 00:03:26.138 "name": "Passthru0", 00:03:26.138 "aliases": [ 00:03:26.138 "890e24a0-b9f0-59b8-8c4c-1eae54f1e881" 00:03:26.138 ], 00:03:26.138 "product_name": "passthru", 00:03:26.138 "block_size": 512, 00:03:26.138 "num_blocks": 16384, 00:03:26.138 "uuid": 
"890e24a0-b9f0-59b8-8c4c-1eae54f1e881", 00:03:26.138 "assigned_rate_limits": { 00:03:26.138 "rw_ios_per_sec": 0, 00:03:26.138 "rw_mbytes_per_sec": 0, 00:03:26.138 "r_mbytes_per_sec": 0, 00:03:26.138 "w_mbytes_per_sec": 0 00:03:26.138 }, 00:03:26.138 "claimed": false, 00:03:26.138 "zoned": false, 00:03:26.138 "supported_io_types": { 00:03:26.138 "read": true, 00:03:26.138 "write": true, 00:03:26.138 "unmap": true, 00:03:26.138 "flush": true, 00:03:26.138 "reset": true, 00:03:26.138 "nvme_admin": false, 00:03:26.138 "nvme_io": false, 00:03:26.138 "nvme_io_md": false, 00:03:26.138 "write_zeroes": true, 00:03:26.138 "zcopy": true, 00:03:26.138 "get_zone_info": false, 00:03:26.138 "zone_management": false, 00:03:26.138 "zone_append": false, 00:03:26.138 "compare": false, 00:03:26.138 "compare_and_write": false, 00:03:26.138 "abort": true, 00:03:26.138 "seek_hole": false, 00:03:26.138 "seek_data": false, 00:03:26.138 "copy": true, 00:03:26.138 "nvme_iov_md": false 00:03:26.138 }, 00:03:26.138 "memory_domains": [ 00:03:26.138 { 00:03:26.138 "dma_device_id": "system", 00:03:26.138 "dma_device_type": 1 00:03:26.138 }, 00:03:26.138 { 00:03:26.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:26.138 "dma_device_type": 2 00:03:26.138 } 00:03:26.138 ], 00:03:26.138 "driver_specific": { 00:03:26.138 "passthru": { 00:03:26.138 "name": "Passthru0", 00:03:26.138 "base_bdev_name": "Malloc2" 00:03:26.138 } 00:03:26.138 } 00:03:26.138 } 00:03:26.138 ]' 00:03:26.138 18:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:26.138 18:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:26.138 18:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:26.138 18:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:26.138 18:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:26.138 18:40:48 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:26.138 18:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:26.138 18:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:26.138 18:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:26.138 18:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:26.138 18:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:26.138 18:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:26.138 18:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:26.138 18:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:26.138 18:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:26.138 18:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:26.138 18:40:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:26.138 00:03:26.138 real 0m0.279s 00:03:26.138 user 0m0.179s 00:03:26.138 sys 0m0.034s 00:03:26.138 18:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:26.138 18:40:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:26.138 ************************************ 00:03:26.138 END TEST rpc_daemon_integrity 00:03:26.138 ************************************ 00:03:26.138 18:40:48 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:26.138 18:40:48 rpc -- rpc/rpc.sh@84 -- # killprocess 3446719 00:03:26.138 18:40:48 rpc -- common/autotest_common.sh@954 -- # '[' -z 3446719 ']' 00:03:26.138 18:40:48 rpc -- common/autotest_common.sh@958 -- # kill -0 3446719 00:03:26.138 18:40:48 rpc -- common/autotest_common.sh@959 -- # uname 00:03:26.138 18:40:48 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:26.138 18:40:48 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3446719 00:03:26.138 18:40:48 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:26.138 18:40:48 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:26.138 18:40:48 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3446719' 00:03:26.138 killing process with pid 3446719 00:03:26.138 18:40:48 rpc -- common/autotest_common.sh@973 -- # kill 3446719 00:03:26.138 18:40:48 rpc -- common/autotest_common.sh@978 -- # wait 3446719 00:03:26.707 00:03:26.707 real 0m2.110s 00:03:26.707 user 0m2.708s 00:03:26.707 sys 0m0.671s 00:03:26.707 18:40:48 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:26.707 18:40:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:26.707 ************************************ 00:03:26.707 END TEST rpc 00:03:26.707 ************************************ 00:03:26.707 18:40:48 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:26.707 18:40:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:26.707 18:40:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:26.707 18:40:48 -- common/autotest_common.sh@10 -- # set +x 00:03:26.707 ************************************ 00:03:26.707 START TEST skip_rpc 00:03:26.707 ************************************ 00:03:26.707 18:40:48 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:26.707 * Looking for test storage... 
00:03:26.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:26.707 18:40:48 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:26.707 18:40:48 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:26.707 18:40:48 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:26.707 18:40:48 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:26.707 18:40:48 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:26.707 18:40:48 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:26.707 18:40:48 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:26.707 18:40:48 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:26.707 18:40:48 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:26.707 18:40:48 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:26.707 18:40:48 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:26.707 18:40:48 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:26.707 18:40:48 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:26.707 18:40:48 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:26.707 18:40:48 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:26.707 18:40:48 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:26.707 18:40:48 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:26.707 18:40:48 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:26.707 18:40:48 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:26.707 18:40:48 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:26.707 18:40:48 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:26.707 18:40:48 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:26.707 18:40:48 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:26.707 18:40:48 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:26.707 18:40:48 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:26.707 18:40:48 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:26.707 18:40:48 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:26.707 18:40:48 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:26.707 18:40:48 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:26.707 18:40:48 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:26.707 18:40:48 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:26.707 18:40:48 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:26.707 18:40:48 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:26.707 18:40:48 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:26.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:26.707 --rc genhtml_branch_coverage=1 00:03:26.707 --rc genhtml_function_coverage=1 00:03:26.707 --rc genhtml_legend=1 00:03:26.707 --rc geninfo_all_blocks=1 00:03:26.707 --rc geninfo_unexecuted_blocks=1 00:03:26.707 00:03:26.707 ' 00:03:26.707 18:40:48 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:26.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:26.707 --rc genhtml_branch_coverage=1 00:03:26.707 --rc genhtml_function_coverage=1 00:03:26.707 --rc genhtml_legend=1 00:03:26.707 --rc geninfo_all_blocks=1 00:03:26.707 --rc geninfo_unexecuted_blocks=1 00:03:26.707 00:03:26.707 ' 00:03:26.707 18:40:48 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:03:26.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:26.707 --rc genhtml_branch_coverage=1 00:03:26.707 --rc genhtml_function_coverage=1 00:03:26.707 --rc genhtml_legend=1 00:03:26.707 --rc geninfo_all_blocks=1 00:03:26.707 --rc geninfo_unexecuted_blocks=1 00:03:26.707 00:03:26.707 ' 00:03:26.707 18:40:48 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:26.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:26.707 --rc genhtml_branch_coverage=1 00:03:26.707 --rc genhtml_function_coverage=1 00:03:26.707 --rc genhtml_legend=1 00:03:26.707 --rc geninfo_all_blocks=1 00:03:26.707 --rc geninfo_unexecuted_blocks=1 00:03:26.707 00:03:26.707 ' 00:03:26.707 18:40:48 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:26.707 18:40:48 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:26.707 18:40:48 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:26.707 18:40:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:26.707 18:40:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:26.707 18:40:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:26.707 ************************************ 00:03:26.707 START TEST skip_rpc 00:03:26.707 ************************************ 00:03:26.707 18:40:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:26.707 18:40:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3447354 00:03:26.707 18:40:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:26.707 18:40:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:26.707 18:40:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:03:27.037 [2024-11-20 18:40:49.067922] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:03:27.037 [2024-11-20 18:40:49.067959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3447354 ] 00:03:27.037 [2024-11-20 18:40:49.142046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:27.037 [2024-11-20 18:40:49.181678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:32.474 18:40:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:32.474 18:40:54 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:32.474 18:40:54 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:32.474 18:40:54 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:32.474 18:40:54 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:32.474 18:40:54 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:32.474 18:40:54 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:32.474 18:40:54 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:32.474 18:40:54 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:32.474 18:40:54 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:32.474 18:40:54 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:32.474 18:40:54 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:32.474 18:40:54 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:32.474 18:40:54 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:32.474 18:40:54 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:32.474 18:40:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:32.474 18:40:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3447354 00:03:32.474 18:40:54 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 3447354 ']' 00:03:32.474 18:40:54 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 3447354 00:03:32.474 18:40:54 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:32.474 18:40:54 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:32.474 18:40:54 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3447354 00:03:32.474 18:40:54 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:32.474 18:40:54 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:32.474 18:40:54 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3447354' 00:03:32.474 killing process with pid 3447354 00:03:32.474 18:40:54 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 3447354 00:03:32.474 18:40:54 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 3447354 00:03:32.474 00:03:32.474 real 0m5.368s 00:03:32.474 user 0m5.130s 00:03:32.474 sys 0m0.276s 00:03:32.474 18:40:54 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:32.474 18:40:54 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:32.474 ************************************ 00:03:32.474 END TEST skip_rpc 00:03:32.474 ************************************ 00:03:32.474 18:40:54 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:32.474 18:40:54 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:32.474 18:40:54 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:32.474 18:40:54 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:32.474 ************************************ 00:03:32.474 START TEST skip_rpc_with_json 00:03:32.474 ************************************ 00:03:32.474 18:40:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:32.474 18:40:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:32.474 18:40:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3448310 00:03:32.474 18:40:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:32.474 18:40:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:32.474 18:40:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3448310 00:03:32.474 18:40:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 3448310 ']' 00:03:32.474 18:40:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:32.474 18:40:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:32.474 18:40:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:32.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:32.474 18:40:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:32.474 18:40:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:32.474 [2024-11-20 18:40:54.507739] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:03:32.474 [2024-11-20 18:40:54.507780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3448310 ] 00:03:32.474 [2024-11-20 18:40:54.580590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:32.474 [2024-11-20 18:40:54.622388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:32.733 18:40:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:32.733 18:40:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:32.733 18:40:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:32.733 18:40:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:32.733 18:40:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:32.733 [2024-11-20 18:40:54.835867] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:32.733 request: 00:03:32.733 { 00:03:32.733 "trtype": "tcp", 00:03:32.733 "method": "nvmf_get_transports", 00:03:32.733 "req_id": 1 00:03:32.733 } 00:03:32.733 Got JSON-RPC error response 00:03:32.733 response: 00:03:32.733 { 00:03:32.733 "code": -19, 00:03:32.733 "message": "No such device" 00:03:32.733 } 00:03:32.733 18:40:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:32.733 18:40:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:32.733 18:40:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:32.733 18:40:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:32.733 [2024-11-20 18:40:54.847969] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:32.733 18:40:54 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:32.733 18:40:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:32.733 18:40:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:32.733 18:40:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:32.733 18:40:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:32.733 18:40:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:32.733 { 00:03:32.733 "subsystems": [ 00:03:32.733 { 00:03:32.734 "subsystem": "fsdev", 00:03:32.734 "config": [ 00:03:32.734 { 00:03:32.734 "method": "fsdev_set_opts", 00:03:32.734 "params": { 00:03:32.734 "fsdev_io_pool_size": 65535, 00:03:32.734 "fsdev_io_cache_size": 256 00:03:32.734 } 00:03:32.734 } 00:03:32.734 ] 00:03:32.734 }, 00:03:32.734 { 00:03:32.734 "subsystem": "vfio_user_target", 00:03:32.734 "config": null 00:03:32.734 }, 00:03:32.734 { 00:03:32.734 "subsystem": "keyring", 00:03:32.734 "config": [] 00:03:32.734 }, 00:03:32.734 { 00:03:32.734 "subsystem": "iobuf", 00:03:32.734 "config": [ 00:03:32.734 { 00:03:32.734 "method": "iobuf_set_options", 00:03:32.734 "params": { 00:03:32.734 "small_pool_count": 8192, 00:03:32.734 "large_pool_count": 1024, 00:03:32.734 "small_bufsize": 8192, 00:03:32.734 "large_bufsize": 135168, 00:03:32.734 "enable_numa": false 00:03:32.734 } 00:03:32.734 } 00:03:32.734 ] 00:03:32.734 }, 00:03:32.734 { 00:03:32.734 "subsystem": "sock", 00:03:32.734 "config": [ 00:03:32.734 { 00:03:32.734 "method": "sock_set_default_impl", 00:03:32.734 "params": { 00:03:32.734 "impl_name": "posix" 00:03:32.734 } 00:03:32.734 }, 00:03:32.734 { 00:03:32.734 "method": "sock_impl_set_options", 00:03:32.734 "params": { 00:03:32.734 "impl_name": "ssl", 00:03:32.734 "recv_buf_size": 4096, 00:03:32.734 "send_buf_size": 4096, 
00:03:32.734 "enable_recv_pipe": true, 00:03:32.734 "enable_quickack": false, 00:03:32.734 "enable_placement_id": 0, 00:03:32.734 "enable_zerocopy_send_server": true, 00:03:32.734 "enable_zerocopy_send_client": false, 00:03:32.734 "zerocopy_threshold": 0, 00:03:32.734 "tls_version": 0, 00:03:32.734 "enable_ktls": false 00:03:32.734 } 00:03:32.734 }, 00:03:32.734 { 00:03:32.734 "method": "sock_impl_set_options", 00:03:32.734 "params": { 00:03:32.734 "impl_name": "posix", 00:03:32.734 "recv_buf_size": 2097152, 00:03:32.734 "send_buf_size": 2097152, 00:03:32.734 "enable_recv_pipe": true, 00:03:32.734 "enable_quickack": false, 00:03:32.734 "enable_placement_id": 0, 00:03:32.734 "enable_zerocopy_send_server": true, 00:03:32.734 "enable_zerocopy_send_client": false, 00:03:32.734 "zerocopy_threshold": 0, 00:03:32.734 "tls_version": 0, 00:03:32.734 "enable_ktls": false 00:03:32.734 } 00:03:32.734 } 00:03:32.734 ] 00:03:32.734 }, 00:03:32.734 { 00:03:32.734 "subsystem": "vmd", 00:03:32.734 "config": [] 00:03:32.734 }, 00:03:32.734 { 00:03:32.734 "subsystem": "accel", 00:03:32.734 "config": [ 00:03:32.734 { 00:03:32.734 "method": "accel_set_options", 00:03:32.734 "params": { 00:03:32.734 "small_cache_size": 128, 00:03:32.734 "large_cache_size": 16, 00:03:32.734 "task_count": 2048, 00:03:32.734 "sequence_count": 2048, 00:03:32.734 "buf_count": 2048 00:03:32.734 } 00:03:32.734 } 00:03:32.734 ] 00:03:32.734 }, 00:03:32.734 { 00:03:32.734 "subsystem": "bdev", 00:03:32.734 "config": [ 00:03:32.734 { 00:03:32.734 "method": "bdev_set_options", 00:03:32.734 "params": { 00:03:32.734 "bdev_io_pool_size": 65535, 00:03:32.734 "bdev_io_cache_size": 256, 00:03:32.734 "bdev_auto_examine": true, 00:03:32.734 "iobuf_small_cache_size": 128, 00:03:32.734 "iobuf_large_cache_size": 16 00:03:32.734 } 00:03:32.734 }, 00:03:32.734 { 00:03:32.734 "method": "bdev_raid_set_options", 00:03:32.734 "params": { 00:03:32.734 "process_window_size_kb": 1024, 00:03:32.734 "process_max_bandwidth_mb_sec": 0 
00:03:32.734 } 00:03:32.734 }, 00:03:32.734 { 00:03:32.734 "method": "bdev_iscsi_set_options", 00:03:32.734 "params": { 00:03:32.734 "timeout_sec": 30 00:03:32.734 } 00:03:32.734 }, 00:03:32.734 { 00:03:32.734 "method": "bdev_nvme_set_options", 00:03:32.734 "params": { 00:03:32.734 "action_on_timeout": "none", 00:03:32.734 "timeout_us": 0, 00:03:32.734 "timeout_admin_us": 0, 00:03:32.734 "keep_alive_timeout_ms": 10000, 00:03:32.734 "arbitration_burst": 0, 00:03:32.734 "low_priority_weight": 0, 00:03:32.734 "medium_priority_weight": 0, 00:03:32.734 "high_priority_weight": 0, 00:03:32.734 "nvme_adminq_poll_period_us": 10000, 00:03:32.734 "nvme_ioq_poll_period_us": 0, 00:03:32.734 "io_queue_requests": 0, 00:03:32.734 "delay_cmd_submit": true, 00:03:32.734 "transport_retry_count": 4, 00:03:32.734 "bdev_retry_count": 3, 00:03:32.734 "transport_ack_timeout": 0, 00:03:32.734 "ctrlr_loss_timeout_sec": 0, 00:03:32.734 "reconnect_delay_sec": 0, 00:03:32.734 "fast_io_fail_timeout_sec": 0, 00:03:32.734 "disable_auto_failback": false, 00:03:32.734 "generate_uuids": false, 00:03:32.734 "transport_tos": 0, 00:03:32.734 "nvme_error_stat": false, 00:03:32.734 "rdma_srq_size": 0, 00:03:32.734 "io_path_stat": false, 00:03:32.734 "allow_accel_sequence": false, 00:03:32.734 "rdma_max_cq_size": 0, 00:03:32.734 "rdma_cm_event_timeout_ms": 0, 00:03:32.734 "dhchap_digests": [ 00:03:32.734 "sha256", 00:03:32.734 "sha384", 00:03:32.734 "sha512" 00:03:32.734 ], 00:03:32.734 "dhchap_dhgroups": [ 00:03:32.734 "null", 00:03:32.734 "ffdhe2048", 00:03:32.734 "ffdhe3072", 00:03:32.734 "ffdhe4096", 00:03:32.734 "ffdhe6144", 00:03:32.734 "ffdhe8192" 00:03:32.734 ] 00:03:32.734 } 00:03:32.734 }, 00:03:32.734 { 00:03:32.734 "method": "bdev_nvme_set_hotplug", 00:03:32.734 "params": { 00:03:32.734 "period_us": 100000, 00:03:32.734 "enable": false 00:03:32.734 } 00:03:32.734 }, 00:03:32.734 { 00:03:32.734 "method": "bdev_wait_for_examine" 00:03:32.734 } 00:03:32.734 ] 00:03:32.734 }, 00:03:32.734 { 
00:03:32.734 "subsystem": "scsi", 00:03:32.735 "config": null 00:03:32.735 }, 00:03:32.735 { 00:03:32.735 "subsystem": "scheduler", 00:03:32.735 "config": [ 00:03:32.735 { 00:03:32.735 "method": "framework_set_scheduler", 00:03:32.735 "params": { 00:03:32.735 "name": "static" 00:03:32.735 } 00:03:32.735 } 00:03:32.735 ] 00:03:32.735 }, 00:03:32.735 { 00:03:32.735 "subsystem": "vhost_scsi", 00:03:32.735 "config": [] 00:03:32.735 }, 00:03:32.735 { 00:03:32.735 "subsystem": "vhost_blk", 00:03:32.735 "config": [] 00:03:32.735 }, 00:03:32.735 { 00:03:32.735 "subsystem": "ublk", 00:03:32.735 "config": [] 00:03:32.735 }, 00:03:32.735 { 00:03:32.735 "subsystem": "nbd", 00:03:32.735 "config": [] 00:03:32.735 }, 00:03:32.735 { 00:03:32.735 "subsystem": "nvmf", 00:03:32.735 "config": [ 00:03:32.735 { 00:03:32.735 "method": "nvmf_set_config", 00:03:32.735 "params": { 00:03:32.735 "discovery_filter": "match_any", 00:03:32.735 "admin_cmd_passthru": { 00:03:32.735 "identify_ctrlr": false 00:03:32.735 }, 00:03:32.735 "dhchap_digests": [ 00:03:32.735 "sha256", 00:03:32.735 "sha384", 00:03:32.735 "sha512" 00:03:32.735 ], 00:03:32.735 "dhchap_dhgroups": [ 00:03:32.735 "null", 00:03:32.735 "ffdhe2048", 00:03:32.735 "ffdhe3072", 00:03:32.735 "ffdhe4096", 00:03:32.735 "ffdhe6144", 00:03:32.735 "ffdhe8192" 00:03:32.735 ] 00:03:32.735 } 00:03:32.735 }, 00:03:32.735 { 00:03:32.735 "method": "nvmf_set_max_subsystems", 00:03:32.735 "params": { 00:03:32.735 "max_subsystems": 1024 00:03:32.735 } 00:03:32.735 }, 00:03:32.735 { 00:03:32.735 "method": "nvmf_set_crdt", 00:03:32.735 "params": { 00:03:32.735 "crdt1": 0, 00:03:32.735 "crdt2": 0, 00:03:32.735 "crdt3": 0 00:03:32.735 } 00:03:32.735 }, 00:03:32.735 { 00:03:32.735 "method": "nvmf_create_transport", 00:03:32.735 "params": { 00:03:32.735 "trtype": "TCP", 00:03:32.735 "max_queue_depth": 128, 00:03:32.735 "max_io_qpairs_per_ctrlr": 127, 00:03:32.735 "in_capsule_data_size": 4096, 00:03:32.735 "max_io_size": 131072, 00:03:32.735 
"io_unit_size": 131072, 00:03:32.735 "max_aq_depth": 128, 00:03:32.735 "num_shared_buffers": 511, 00:03:32.735 "buf_cache_size": 4294967295, 00:03:32.735 "dif_insert_or_strip": false, 00:03:32.735 "zcopy": false, 00:03:32.735 "c2h_success": true, 00:03:32.735 "sock_priority": 0, 00:03:32.735 "abort_timeout_sec": 1, 00:03:32.735 "ack_timeout": 0, 00:03:32.735 "data_wr_pool_size": 0 00:03:32.735 } 00:03:32.735 } 00:03:32.735 ] 00:03:32.735 }, 00:03:32.735 { 00:03:32.735 "subsystem": "iscsi", 00:03:32.735 "config": [ 00:03:32.735 { 00:03:32.735 "method": "iscsi_set_options", 00:03:32.735 "params": { 00:03:32.735 "node_base": "iqn.2016-06.io.spdk", 00:03:32.735 "max_sessions": 128, 00:03:32.735 "max_connections_per_session": 2, 00:03:32.735 "max_queue_depth": 64, 00:03:32.735 "default_time2wait": 2, 00:03:32.735 "default_time2retain": 20, 00:03:32.735 "first_burst_length": 8192, 00:03:32.735 "immediate_data": true, 00:03:32.735 "allow_duplicated_isid": false, 00:03:32.735 "error_recovery_level": 0, 00:03:32.735 "nop_timeout": 60, 00:03:32.735 "nop_in_interval": 30, 00:03:32.735 "disable_chap": false, 00:03:32.735 "require_chap": false, 00:03:32.735 "mutual_chap": false, 00:03:32.735 "chap_group": 0, 00:03:32.735 "max_large_datain_per_connection": 64, 00:03:32.735 "max_r2t_per_connection": 4, 00:03:32.735 "pdu_pool_size": 36864, 00:03:32.735 "immediate_data_pool_size": 16384, 00:03:32.735 "data_out_pool_size": 2048 00:03:32.735 } 00:03:32.735 } 00:03:32.735 ] 00:03:32.735 } 00:03:32.735 ] 00:03:32.735 } 00:03:32.735 18:40:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:32.735 18:40:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3448310 00:03:32.735 18:40:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3448310 ']' 00:03:32.735 18:40:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3448310 00:03:32.735 18:40:55 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:03:32.735 18:40:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:32.735 18:40:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3448310 00:03:32.995 18:40:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:32.995 18:40:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:32.995 18:40:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3448310' 00:03:32.995 killing process with pid 3448310 00:03:32.995 18:40:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3448310 00:03:32.995 18:40:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3448310 00:03:33.254 18:40:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3448528 00:03:33.254 18:40:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:33.254 18:40:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:38.524 18:41:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3448528 00:03:38.524 18:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3448528 ']' 00:03:38.524 18:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3448528 00:03:38.524 18:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:38.524 18:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:38.524 18:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3448528 00:03:38.524 18:41:00 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:38.524 18:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:38.524 18:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3448528' 00:03:38.524 killing process with pid 3448528 00:03:38.524 18:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3448528 00:03:38.524 18:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3448528 00:03:38.524 18:41:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:38.524 18:41:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:38.524 00:03:38.524 real 0m6.286s 00:03:38.524 user 0m5.983s 00:03:38.524 sys 0m0.595s 00:03:38.524 18:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:38.524 18:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:38.524 ************************************ 00:03:38.524 END TEST skip_rpc_with_json 00:03:38.524 ************************************ 00:03:38.524 18:41:00 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:38.524 18:41:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:38.524 18:41:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:38.524 18:41:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:38.524 ************************************ 00:03:38.524 START TEST skip_rpc_with_delay 00:03:38.524 ************************************ 00:03:38.524 18:41:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:03:38.524 18:41:00 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:38.524 18:41:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:03:38.524 18:41:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:38.524 18:41:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:38.524 18:41:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:38.524 18:41:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:38.524 18:41:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:38.525 18:41:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:38.525 18:41:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:38.525 18:41:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:38.525 18:41:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:38.525 18:41:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:38.784 [2024-11-20 18:41:00.870058] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:03:38.784 18:41:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:03:38.784 18:41:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:38.784 18:41:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:38.784 18:41:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:38.784 00:03:38.784 real 0m0.072s 00:03:38.784 user 0m0.043s 00:03:38.784 sys 0m0.028s 00:03:38.784 18:41:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:38.784 18:41:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:38.784 ************************************ 00:03:38.784 END TEST skip_rpc_with_delay 00:03:38.784 ************************************ 00:03:38.784 18:41:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:38.784 18:41:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:38.784 18:41:00 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:38.784 18:41:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:38.784 18:41:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:38.784 18:41:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:38.784 ************************************ 00:03:38.784 START TEST exit_on_failed_rpc_init 00:03:38.784 ************************************ 00:03:38.784 18:41:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:03:38.784 18:41:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3449518 00:03:38.784 18:41:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3449518 00:03:38.784 18:41:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:03:38.784 18:41:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 3449518 ']' 00:03:38.784 18:41:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:38.784 18:41:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:38.784 18:41:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:38.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:38.784 18:41:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:38.784 18:41:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:38.784 [2024-11-20 18:41:01.009462] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:03:38.784 [2024-11-20 18:41:01.009504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3449518 ] 00:03:38.784 [2024-11-20 18:41:01.081735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:39.043 [2024-11-20 18:41:01.122655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:39.044 18:41:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:39.044 18:41:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:03:39.044 18:41:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:39.044 18:41:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:39.044 
18:41:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:03:39.044 18:41:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:39.044 18:41:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:39.044 18:41:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:39.044 18:41:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:39.044 18:41:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:39.044 18:41:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:39.044 18:41:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:39.044 18:41:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:39.044 18:41:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:39.044 18:41:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:39.305 [2024-11-20 18:41:01.401183] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:03:39.305 [2024-11-20 18:41:01.401236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3449524 ] 00:03:39.305 [2024-11-20 18:41:01.475366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:39.305 [2024-11-20 18:41:01.515956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:39.305 [2024-11-20 18:41:01.516013] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:03:39.305 [2024-11-20 18:41:01.516022] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:39.305 [2024-11-20 18:41:01.516030] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:39.305 18:41:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:03:39.305 18:41:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:39.305 18:41:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:03:39.305 18:41:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:03:39.305 18:41:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:03:39.305 18:41:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:39.305 18:41:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:39.305 18:41:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3449518 00:03:39.305 18:41:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 3449518 ']' 00:03:39.305 18:41:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 3449518 00:03:39.305 18:41:01 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:03:39.305 18:41:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:39.305 18:41:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3449518 00:03:39.305 18:41:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:39.305 18:41:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:39.305 18:41:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3449518' 00:03:39.305 killing process with pid 3449518 00:03:39.305 18:41:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 3449518 00:03:39.305 18:41:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 3449518 00:03:39.873 00:03:39.873 real 0m0.958s 00:03:39.873 user 0m1.021s 00:03:39.873 sys 0m0.391s 00:03:39.873 18:41:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:39.873 18:41:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:39.873 ************************************ 00:03:39.873 END TEST exit_on_failed_rpc_init 00:03:39.873 ************************************ 00:03:39.873 18:41:01 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:39.873 00:03:39.873 real 0m13.147s 00:03:39.873 user 0m12.405s 00:03:39.873 sys 0m1.557s 00:03:39.873 18:41:01 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:39.873 18:41:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:39.873 ************************************ 00:03:39.873 END TEST skip_rpc 00:03:39.873 ************************************ 00:03:39.873 18:41:01 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:39.873 18:41:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:39.873 18:41:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:39.873 18:41:01 -- common/autotest_common.sh@10 -- # set +x 00:03:39.873 ************************************ 00:03:39.873 START TEST rpc_client 00:03:39.873 ************************************ 00:03:39.873 18:41:02 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:39.873 * Looking for test storage... 00:03:39.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:39.873 18:41:02 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:39.873 18:41:02 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:03:39.873 18:41:02 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:39.873 18:41:02 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:39.873 18:41:02 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:39.873 18:41:02 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:39.873 18:41:02 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:39.873 18:41:02 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:03:39.873 18:41:02 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:03:39.873 18:41:02 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:03:39.873 18:41:02 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:03:39.873 18:41:02 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:03:39.873 18:41:02 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:03:39.873 18:41:02 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:03:39.873 18:41:02 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:39.873 18:41:02 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:03:39.873 18:41:02 rpc_client -- scripts/common.sh@345 -- # : 1 00:03:39.873 18:41:02 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:39.873 18:41:02 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:39.873 18:41:02 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:03:39.873 18:41:02 rpc_client -- scripts/common.sh@353 -- # local d=1 00:03:39.873 18:41:02 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:39.873 18:41:02 rpc_client -- scripts/common.sh@355 -- # echo 1 00:03:39.873 18:41:02 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:03:39.873 18:41:02 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:03:39.873 18:41:02 rpc_client -- scripts/common.sh@353 -- # local d=2 00:03:39.873 18:41:02 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:39.873 18:41:02 rpc_client -- scripts/common.sh@355 -- # echo 2 00:03:40.134 18:41:02 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:03:40.134 18:41:02 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:40.134 18:41:02 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:40.134 18:41:02 rpc_client -- scripts/common.sh@368 -- # return 0 00:03:40.134 18:41:02 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:40.134 18:41:02 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:40.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.134 --rc genhtml_branch_coverage=1 00:03:40.134 --rc genhtml_function_coverage=1 00:03:40.134 --rc genhtml_legend=1 00:03:40.134 --rc geninfo_all_blocks=1 00:03:40.134 --rc geninfo_unexecuted_blocks=1 00:03:40.134 00:03:40.134 ' 00:03:40.134 18:41:02 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:40.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.134 --rc genhtml_branch_coverage=1 
00:03:40.134 --rc genhtml_function_coverage=1 00:03:40.134 --rc genhtml_legend=1 00:03:40.134 --rc geninfo_all_blocks=1 00:03:40.134 --rc geninfo_unexecuted_blocks=1 00:03:40.134 00:03:40.134 ' 00:03:40.134 18:41:02 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:40.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.134 --rc genhtml_branch_coverage=1 00:03:40.134 --rc genhtml_function_coverage=1 00:03:40.134 --rc genhtml_legend=1 00:03:40.134 --rc geninfo_all_blocks=1 00:03:40.134 --rc geninfo_unexecuted_blocks=1 00:03:40.134 00:03:40.134 ' 00:03:40.134 18:41:02 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:40.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.134 --rc genhtml_branch_coverage=1 00:03:40.134 --rc genhtml_function_coverage=1 00:03:40.134 --rc genhtml_legend=1 00:03:40.134 --rc geninfo_all_blocks=1 00:03:40.134 --rc geninfo_unexecuted_blocks=1 00:03:40.134 00:03:40.134 ' 00:03:40.134 18:41:02 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:40.134 OK 00:03:40.134 18:41:02 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:40.134 00:03:40.134 real 0m0.202s 00:03:40.134 user 0m0.131s 00:03:40.134 sys 0m0.085s 00:03:40.134 18:41:02 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:40.134 18:41:02 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:40.134 ************************************ 00:03:40.134 END TEST rpc_client 00:03:40.134 ************************************ 00:03:40.134 18:41:02 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:40.134 18:41:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:40.134 18:41:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:40.134 18:41:02 -- common/autotest_common.sh@10 
-- # set +x 00:03:40.134 ************************************ 00:03:40.134 START TEST json_config 00:03:40.134 ************************************ 00:03:40.134 18:41:02 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:40.134 18:41:02 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:40.134 18:41:02 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:03:40.134 18:41:02 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:40.134 18:41:02 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:40.134 18:41:02 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:40.134 18:41:02 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:40.134 18:41:02 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:40.134 18:41:02 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:03:40.134 18:41:02 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:03:40.134 18:41:02 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:03:40.134 18:41:02 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:03:40.134 18:41:02 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:03:40.134 18:41:02 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:03:40.134 18:41:02 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:03:40.134 18:41:02 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:40.134 18:41:02 json_config -- scripts/common.sh@344 -- # case "$op" in 00:03:40.134 18:41:02 json_config -- scripts/common.sh@345 -- # : 1 00:03:40.134 18:41:02 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:40.134 18:41:02 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:40.134 18:41:02 json_config -- scripts/common.sh@365 -- # decimal 1 00:03:40.134 18:41:02 json_config -- scripts/common.sh@353 -- # local d=1 00:03:40.134 18:41:02 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:40.134 18:41:02 json_config -- scripts/common.sh@355 -- # echo 1 00:03:40.134 18:41:02 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:03:40.134 18:41:02 json_config -- scripts/common.sh@366 -- # decimal 2 00:03:40.134 18:41:02 json_config -- scripts/common.sh@353 -- # local d=2 00:03:40.134 18:41:02 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:40.134 18:41:02 json_config -- scripts/common.sh@355 -- # echo 2 00:03:40.134 18:41:02 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:03:40.134 18:41:02 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:40.134 18:41:02 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:40.134 18:41:02 json_config -- scripts/common.sh@368 -- # return 0 00:03:40.134 18:41:02 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:40.134 18:41:02 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:40.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.134 --rc genhtml_branch_coverage=1 00:03:40.134 --rc genhtml_function_coverage=1 00:03:40.134 --rc genhtml_legend=1 00:03:40.134 --rc geninfo_all_blocks=1 00:03:40.134 --rc geninfo_unexecuted_blocks=1 00:03:40.134 00:03:40.134 ' 00:03:40.134 18:41:02 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:40.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.134 --rc genhtml_branch_coverage=1 00:03:40.134 --rc genhtml_function_coverage=1 00:03:40.134 --rc genhtml_legend=1 00:03:40.134 --rc geninfo_all_blocks=1 00:03:40.134 --rc geninfo_unexecuted_blocks=1 00:03:40.134 00:03:40.134 ' 00:03:40.134 18:41:02 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:40.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.134 --rc genhtml_branch_coverage=1 00:03:40.134 --rc genhtml_function_coverage=1 00:03:40.134 --rc genhtml_legend=1 00:03:40.134 --rc geninfo_all_blocks=1 00:03:40.134 --rc geninfo_unexecuted_blocks=1 00:03:40.134 00:03:40.134 ' 00:03:40.134 18:41:02 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:40.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.134 --rc genhtml_branch_coverage=1 00:03:40.134 --rc genhtml_function_coverage=1 00:03:40.134 --rc genhtml_legend=1 00:03:40.134 --rc geninfo_all_blocks=1 00:03:40.134 --rc geninfo_unexecuted_blocks=1 00:03:40.134 00:03:40.134 ' 00:03:40.134 18:41:02 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:40.134 18:41:02 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:40.134 18:41:02 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:40.134 18:41:02 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:40.134 18:41:02 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:40.134 18:41:02 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:40.134 18:41:02 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:40.134 18:41:02 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:40.134 18:41:02 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:40.134 18:41:02 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:40.134 18:41:02 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:40.134 18:41:02 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:40.395 18:41:02 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:03:40.395 18:41:02 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:03:40.395 18:41:02 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:40.395 18:41:02 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:40.395 18:41:02 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:40.395 18:41:02 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:40.395 18:41:02 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:40.395 18:41:02 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:03:40.395 18:41:02 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:40.395 18:41:02 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:40.395 18:41:02 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:40.395 18:41:02 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.395 18:41:02 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.395 18:41:02 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.395 18:41:02 json_config -- paths/export.sh@5 -- # export PATH 00:03:40.395 18:41:02 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.395 18:41:02 json_config -- nvmf/common.sh@51 -- # : 0 00:03:40.395 18:41:02 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:40.395 18:41:02 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:40.395 18:41:02 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:40.395 18:41:02 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:40.395 18:41:02 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:40.395 18:41:02 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:40.395 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:40.395 18:41:02 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:40.395 18:41:02 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:40.395 18:41:02 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:40.395 18:41:02 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:40.395 18:41:02 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:40.395 18:41:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:40.395 18:41:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:40.395 18:41:02 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:40.395 18:41:02 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:40.395 18:41:02 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:40.395 18:41:02 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:40.395 18:41:02 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:40.395 18:41:02 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:40.395 18:41:02 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:40.395 18:41:02 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:40.395 18:41:02 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:40.395 18:41:02 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:40.395 18:41:02 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:40.395 18:41:02 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:03:40.395 INFO: JSON configuration test init 00:03:40.395 18:41:02 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:03:40.395 18:41:02 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:03:40.395 18:41:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:40.395 18:41:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:40.395 18:41:02 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:03:40.395 18:41:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:40.395 18:41:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:40.395 18:41:02 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:03:40.395 18:41:02 json_config -- json_config/common.sh@9 -- # local app=target 00:03:40.395 18:41:02 json_config -- json_config/common.sh@10 -- # shift 00:03:40.395 18:41:02 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:40.395 18:41:02 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:40.395 18:41:02 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:40.395 18:41:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:40.395 18:41:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:40.395 18:41:02 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3449877 00:03:40.395 18:41:02 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:40.395 Waiting for target to run... 
00:03:40.395 18:41:02 json_config -- json_config/common.sh@25 -- # waitforlisten 3449877 /var/tmp/spdk_tgt.sock 00:03:40.395 18:41:02 json_config -- common/autotest_common.sh@835 -- # '[' -z 3449877 ']' 00:03:40.395 18:41:02 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:40.395 18:41:02 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:40.395 18:41:02 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:40.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:40.395 18:41:02 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:40.395 18:41:02 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:40.395 18:41:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:40.395 [2024-11-20 18:41:02.537269] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:03:40.395 [2024-11-20 18:41:02.537311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3449877 ] 00:03:40.964 [2024-11-20 18:41:02.987555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:40.964 [2024-11-20 18:41:03.040353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:41.222 18:41:03 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:41.222 18:41:03 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:41.222 18:41:03 json_config -- json_config/common.sh@26 -- # echo '' 00:03:41.222 00:03:41.222 18:41:03 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:03:41.222 18:41:03 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:03:41.222 18:41:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:41.222 18:41:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:41.222 18:41:03 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:03:41.222 18:41:03 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:03:41.222 18:41:03 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:41.222 18:41:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:41.222 18:41:03 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:41.222 18:41:03 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:03:41.222 18:41:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:44.511 18:41:06 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:03:44.511 18:41:06 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:44.511 18:41:06 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:44.511 18:41:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:44.511 18:41:06 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:44.511 18:41:06 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:44.511 18:41:06 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:44.511 18:41:06 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:03:44.511 18:41:06 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:03:44.511 18:41:06 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:44.512 18:41:06 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:44.512 18:41:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:44.512 18:41:06 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:03:44.512 18:41:06 json_config -- json_config/json_config.sh@51 -- # local get_types 00:03:44.512 18:41:06 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:03:44.512 18:41:06 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:03:44.512 18:41:06 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:03:44.512 18:41:06 json_config -- json_config/json_config.sh@54 -- # sort 00:03:44.512 18:41:06 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:03:44.512 18:41:06 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:03:44.512 18:41:06 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:03:44.512 18:41:06 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:03:44.512 18:41:06 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:44.512 18:41:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:44.512 18:41:06 json_config -- json_config/json_config.sh@62 -- # return 0 00:03:44.512 18:41:06 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:03:44.512 18:41:06 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:03:44.512 18:41:06 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:03:44.512 18:41:06 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:03:44.512 18:41:06 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:03:44.512 18:41:06 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:03:44.512 18:41:06 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:44.512 18:41:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:44.512 18:41:06 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:44.512 18:41:06 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:03:44.512 18:41:06 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:03:44.512 18:41:06 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:44.512 18:41:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:44.771 MallocForNvmf0 00:03:44.771 18:41:06 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:03:44.771 18:41:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:45.029 MallocForNvmf1 00:03:45.029 18:41:07 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:45.029 18:41:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:45.029 [2024-11-20 18:41:07.306129] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:45.029 18:41:07 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:45.029 18:41:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:45.288 18:41:07 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:45.288 18:41:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:45.548 18:41:07 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:45.548 18:41:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:45.807 18:41:07 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:45.807 18:41:07 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:45.807 [2024-11-20 18:41:08.096591] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:45.807 18:41:08 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:03:45.807 18:41:08 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:45.807 18:41:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:46.066 18:41:08 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:03:46.066 18:41:08 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:46.066 18:41:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:46.066 18:41:08 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:03:46.066 18:41:08 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:46.066 18:41:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:46.066 MallocBdevForConfigChangeCheck 00:03:46.066 18:41:08 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:03:46.066 18:41:08 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:46.066 18:41:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:46.325 18:41:08 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:03:46.325 18:41:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:46.584 18:41:08 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:03:46.584 INFO: shutting down applications... 00:03:46.584 18:41:08 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:03:46.584 18:41:08 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:03:46.584 18:41:08 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:03:46.584 18:41:08 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:49.121 Calling clear_iscsi_subsystem 00:03:49.121 Calling clear_nvmf_subsystem 00:03:49.121 Calling clear_nbd_subsystem 00:03:49.121 Calling clear_ublk_subsystem 00:03:49.121 Calling clear_vhost_blk_subsystem 00:03:49.121 Calling clear_vhost_scsi_subsystem 00:03:49.121 Calling clear_bdev_subsystem 00:03:49.121 18:41:10 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:03:49.121 18:41:10 json_config -- json_config/json_config.sh@350 -- # count=100 00:03:49.121 18:41:10 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:03:49.121 18:41:10 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:49.121 18:41:10 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:49.121 18:41:10 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:49.121 18:41:11 json_config -- json_config/json_config.sh@352 -- # break 00:03:49.121 18:41:11 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:03:49.121 18:41:11 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:03:49.121 18:41:11 json_config -- json_config/common.sh@31 -- # local app=target 00:03:49.121 18:41:11 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:49.121 18:41:11 json_config -- json_config/common.sh@35 -- # [[ -n 3449877 ]] 00:03:49.121 18:41:11 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3449877 00:03:49.121 18:41:11 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:49.121 18:41:11 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:49.121 18:41:11 json_config -- json_config/common.sh@41 -- # kill -0 3449877 00:03:49.121 18:41:11 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:03:49.689 18:41:11 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:03:49.689 18:41:11 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:49.689 18:41:11 json_config -- json_config/common.sh@41 -- # kill -0 3449877 00:03:49.689 18:41:11 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:49.689 18:41:11 json_config -- json_config/common.sh@43 -- # break 00:03:49.689 18:41:11 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:49.689 18:41:11 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:49.689 SPDK target shutdown done 00:03:49.689 18:41:11 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:03:49.689 INFO: relaunching applications... 
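The shutdown trace above follows a simple pattern from `json_config/common.sh`: send SIGINT, then poll with `kill -0` for up to 30 half-second intervals until the target exits. A minimal stand-alone sketch of that pattern (function name `shutdown_app` and the retry bounds are taken from the trace; this is an illustration, not the SPDK helper itself):

```shell
#!/usr/bin/env bash
# Sketch of the SIGINT + poll shutdown loop seen in json_config/common.sh.
shutdown_app() {
    local pid=$1
    kill -SIGINT "$pid" 2>/dev/null
    local i
    for (( i = 0; i < 30; i++ )); do
        # kill -0 sends no signal; it only checks that the pid still exists
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done
    return 1   # still alive after ~15 s of polling
}
```

The `kill -0` probe is what makes the loop safe: it never re-signals the process, it only tests liveness, so a target that needs a few seconds to flush state is given a bounded grace period.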
00:03:49.690 18:41:11 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:49.690 18:41:11 json_config -- json_config/common.sh@9 -- # local app=target 00:03:49.690 18:41:11 json_config -- json_config/common.sh@10 -- # shift 00:03:49.690 18:41:11 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:49.690 18:41:11 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:49.690 18:41:11 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:49.690 18:41:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:49.690 18:41:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:49.690 18:41:11 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3451600 00:03:49.690 18:41:11 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:49.690 Waiting for target to run... 00:03:49.690 18:41:11 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:49.690 18:41:11 json_config -- json_config/common.sh@25 -- # waitforlisten 3451600 /var/tmp/spdk_tgt.sock 00:03:49.690 18:41:11 json_config -- common/autotest_common.sh@835 -- # '[' -z 3451600 ']' 00:03:49.690 18:41:11 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:49.690 18:41:11 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:49.690 18:41:11 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:49.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:03:49.690 18:41:11 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:49.690 18:41:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:49.690 [2024-11-20 18:41:11.861365] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:03:49.690 [2024-11-20 18:41:11.861428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3451600 ] 00:03:50.258 [2024-11-20 18:41:12.318420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:50.258 [2024-11-20 18:41:12.375377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:53.547 [2024-11-20 18:41:15.406024] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:53.547 [2024-11-20 18:41:15.438386] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:53.805 18:41:16 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:53.805 18:41:16 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:53.805 18:41:16 json_config -- json_config/common.sh@26 -- # echo '' 00:03:53.805 00:03:53.805 18:41:16 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:03:53.805 18:41:16 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:53.805 INFO: Checking if target configuration is the same... 
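The relaunch above relies on `waitforlisten`, which blocks until the freshly started target is both alive and listening on its UNIX domain socket (`/var/tmp/spdk_tgt.sock`), with `max_retries=100` visible in the trace. A hypothetical condensed version of that wait loop (the real helper lives in `autotest_common.sh`; this sketch only mirrors the observable behavior):

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern: poll until the process is up and
# its UNIX domain socket exists, with a bounded number of retries.
waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock}
    local max_retries=100
    while (( max_retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1   # process died while starting
        [ -S "$sock" ] && return 0               # socket file is present
        sleep 0.1
    done
    return 1   # never came up within the retry budget
}
```

Checking the pid first means a target that crashes during startup fails the wait immediately instead of burning the whole retry budget.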
00:03:53.805 18:41:16 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:03:53.805 18:41:16 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:53.806 18:41:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:53.806 + '[' 2 -ne 2 ']' 00:03:53.806 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:53.806 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:53.806 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:53.806 +++ basename /dev/fd/62 00:03:53.806 ++ mktemp /tmp/62.XXX 00:03:53.806 + tmp_file_1=/tmp/62.QFF 00:03:53.806 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:53.806 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:53.806 + tmp_file_2=/tmp/spdk_tgt_config.json.7Qs 00:03:53.806 + ret=0 00:03:53.806 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:54.373 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:54.373 + diff -u /tmp/62.QFF /tmp/spdk_tgt_config.json.7Qs 00:03:54.373 + echo 'INFO: JSON config files are the same' 00:03:54.373 INFO: JSON config files are the same 00:03:54.373 + rm /tmp/62.QFF /tmp/spdk_tgt_config.json.7Qs 00:03:54.373 + exit 0 00:03:54.373 18:41:16 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:03:54.373 18:41:16 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:03:54.373 INFO: changing configuration and checking if this can be detected... 
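The comparison above (`json_diff.sh`) works by dumping both configurations, canonicalizing each through `config_filter.py -method sort`, and running `diff -u` on the normalized temp files. A rough stand-alone equivalent, substituting python3's `json` module for SPDK's filter script (assumed substitution; the helper names here are illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the json_diff.sh pattern: normalize two JSON files so key
# order cannot cause spurious differences, then compare with diff -u.
compare_json() {
    local a=$1 b=$2
    local t1 t2 rc
    t1=$(mktemp /tmp/json_cmp.XXXXXX)
    t2=$(mktemp /tmp/json_cmp.XXXXXX)
    # Canonicalize: parse and re-dump with sorted keys and fixed indent
    python3 -c 'import json,sys; json.dump(json.load(open(sys.argv[1])), sys.stdout, sort_keys=True, indent=2)' "$a" > "$t1"
    python3 -c 'import json,sys; json.dump(json.load(open(sys.argv[1])), sys.stdout, sort_keys=True, indent=2)' "$b" > "$t2"
    if diff -u "$t1" "$t2"; then
        echo 'INFO: JSON config files are the same'
        rc=0
    else
        rc=1
    fi
    rm -f "$t1" "$t2"
    return $rc
}
```

Sorting before diffing is the whole trick: the target may emit subsystems in a different order after a relaunch, so a raw `diff` of the dumps would report differences that are not semantically real.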
00:03:54.373 18:41:16 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:54.373 18:41:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:54.373 18:41:16 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:03:54.373 18:41:16 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:54.373 18:41:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:54.373 + '[' 2 -ne 2 ']' 00:03:54.373 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:54.373 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:03:54.373 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:54.373 +++ basename /dev/fd/62 00:03:54.373 ++ mktemp /tmp/62.XXX 00:03:54.631 + tmp_file_1=/tmp/62.an9 00:03:54.631 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:54.631 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:54.631 + tmp_file_2=/tmp/spdk_tgt_config.json.0Tt 00:03:54.631 + ret=0 00:03:54.631 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:54.890 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:54.890 + diff -u /tmp/62.an9 /tmp/spdk_tgt_config.json.0Tt 00:03:54.890 + ret=1 00:03:54.890 + echo '=== Start of file: /tmp/62.an9 ===' 00:03:54.890 + cat /tmp/62.an9 00:03:54.890 + echo '=== End of file: /tmp/62.an9 ===' 00:03:54.890 + echo '' 00:03:54.890 + echo '=== Start of file: /tmp/spdk_tgt_config.json.0Tt ===' 00:03:54.890 + cat /tmp/spdk_tgt_config.json.0Tt 00:03:54.890 + echo '=== End of file: /tmp/spdk_tgt_config.json.0Tt ===' 00:03:54.890 + echo '' 00:03:54.890 + rm /tmp/62.an9 /tmp/spdk_tgt_config.json.0Tt 00:03:54.890 + exit 1 00:03:54.890 18:41:17 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:03:54.890 INFO: configuration change detected. 
00:03:54.890 18:41:17 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:03:54.890 18:41:17 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:03:54.890 18:41:17 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:54.890 18:41:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:54.890 18:41:17 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:03:54.890 18:41:17 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:03:54.890 18:41:17 json_config -- json_config/json_config.sh@324 -- # [[ -n 3451600 ]] 00:03:54.890 18:41:17 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:03:54.890 18:41:17 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:03:54.890 18:41:17 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:54.890 18:41:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:54.890 18:41:17 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:03:54.890 18:41:17 json_config -- json_config/json_config.sh@200 -- # uname -s 00:03:54.890 18:41:17 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:03:54.890 18:41:17 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:03:54.890 18:41:17 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:03:54.890 18:41:17 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:03:54.890 18:41:17 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:54.890 18:41:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:54.890 18:41:17 json_config -- json_config/json_config.sh@330 -- # killprocess 3451600 00:03:54.890 18:41:17 json_config -- common/autotest_common.sh@954 -- # '[' -z 3451600 ']' 00:03:54.890 18:41:17 json_config -- common/autotest_common.sh@958 -- # kill -0 
3451600 00:03:54.890 18:41:17 json_config -- common/autotest_common.sh@959 -- # uname 00:03:54.890 18:41:17 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:54.890 18:41:17 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3451600 00:03:54.890 18:41:17 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:54.890 18:41:17 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:54.890 18:41:17 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3451600' 00:03:54.890 killing process with pid 3451600 00:03:54.890 18:41:17 json_config -- common/autotest_common.sh@973 -- # kill 3451600 00:03:54.890 18:41:17 json_config -- common/autotest_common.sh@978 -- # wait 3451600 00:03:57.423 18:41:19 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:57.423 18:41:19 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:03:57.423 18:41:19 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:57.423 18:41:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.423 18:41:19 json_config -- json_config/json_config.sh@335 -- # return 0 00:03:57.423 18:41:19 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:03:57.423 INFO: Success 00:03:57.423 00:03:57.423 real 0m17.014s 00:03:57.423 user 0m17.396s 00:03:57.423 sys 0m2.768s 00:03:57.423 18:41:19 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:57.423 18:41:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.423 ************************************ 00:03:57.423 END TEST json_config 00:03:57.423 ************************************ 00:03:57.423 18:41:19 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:57.423 18:41:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.423 18:41:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.423 18:41:19 -- common/autotest_common.sh@10 -- # set +x 00:03:57.423 ************************************ 00:03:57.423 START TEST json_config_extra_key 00:03:57.423 ************************************ 00:03:57.423 18:41:19 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:57.423 18:41:19 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:57.423 18:41:19 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:03:57.423 18:41:19 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:57.423 18:41:19 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:57.423 18:41:19 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:57.423 18:41:19 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:57.423 18:41:19 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:57.423 18:41:19 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:03:57.423 18:41:19 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:03:57.423 18:41:19 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:03:57.423 18:41:19 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:03:57.423 18:41:19 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:03:57.423 18:41:19 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:03:57.423 18:41:19 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:03:57.423 18:41:19 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:03:57.423 18:41:19 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:03:57.423 18:41:19 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:03:57.423 18:41:19 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:57.423 18:41:19 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:57.424 18:41:19 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:03:57.424 18:41:19 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:03:57.424 18:41:19 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:57.424 18:41:19 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:03:57.424 18:41:19 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:03:57.424 18:41:19 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:03:57.424 18:41:19 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:03:57.424 18:41:19 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:57.424 18:41:19 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:03:57.424 18:41:19 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:03:57.424 18:41:19 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:57.424 18:41:19 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:57.424 18:41:19 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:03:57.424 18:41:19 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:57.424 18:41:19 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:57.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.424 --rc genhtml_branch_coverage=1 00:03:57.424 --rc genhtml_function_coverage=1 00:03:57.424 --rc genhtml_legend=1 00:03:57.424 --rc geninfo_all_blocks=1 
00:03:57.424 --rc geninfo_unexecuted_blocks=1 00:03:57.424 00:03:57.424 ' 00:03:57.424 18:41:19 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:57.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.424 --rc genhtml_branch_coverage=1 00:03:57.424 --rc genhtml_function_coverage=1 00:03:57.424 --rc genhtml_legend=1 00:03:57.424 --rc geninfo_all_blocks=1 00:03:57.424 --rc geninfo_unexecuted_blocks=1 00:03:57.424 00:03:57.424 ' 00:03:57.424 18:41:19 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:57.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.424 --rc genhtml_branch_coverage=1 00:03:57.424 --rc genhtml_function_coverage=1 00:03:57.424 --rc genhtml_legend=1 00:03:57.424 --rc geninfo_all_blocks=1 00:03:57.424 --rc geninfo_unexecuted_blocks=1 00:03:57.424 00:03:57.424 ' 00:03:57.424 18:41:19 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:57.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.424 --rc genhtml_branch_coverage=1 00:03:57.424 --rc genhtml_function_coverage=1 00:03:57.424 --rc genhtml_legend=1 00:03:57.424 --rc geninfo_all_blocks=1 00:03:57.424 --rc geninfo_unexecuted_blocks=1 00:03:57.424 00:03:57.424 ' 00:03:57.424 18:41:19 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:57.424 18:41:19 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:03:57.424 18:41:19 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:57.424 18:41:19 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:57.424 18:41:19 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:57.424 18:41:19 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:57.424 18:41:19 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
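The `lt 1.15 2` / `cmp_versions` trace above splits each version string on `.` and `-` into an array and compares component-wise as integers, padding the shorter version with zeros. A hypothetical condensed form of that logic (`version_lt` is an illustrative name; `scripts/common.sh` structures it slightly differently):

```shell
#!/usr/bin/env bash
# Sketch of the cmp_versions logic: split on dots/dashes, then compare
# each numeric component; missing components default to 0.
version_lt() {
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not "less than"
}
```

This is why `1.15 < 2` holds in the trace: the comparison is numeric per component (`1 < 2` decides it at the first position), not a lexicographic string compare, which would wrongly order `1.9` after `1.15`.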
00:03:57.424 18:41:19 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:57.424 18:41:19 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:57.424 18:41:19 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:57.424 18:41:19 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:57.424 18:41:19 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:57.424 18:41:19 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:03:57.424 18:41:19 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:03:57.424 18:41:19 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:57.424 18:41:19 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:57.424 18:41:19 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:57.424 18:41:19 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:57.424 18:41:19 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:57.424 18:41:19 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:03:57.424 18:41:19 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:57.424 18:41:19 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:57.424 18:41:19 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:57.424 18:41:19 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.424 18:41:19 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.424 18:41:19 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.424 18:41:19 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:03:57.424 18:41:19 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.424 18:41:19 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:03:57.424 18:41:19 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:57.424 18:41:19 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:57.424 18:41:19 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:57.424 18:41:19 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:57.424 18:41:19 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:57.424 18:41:19 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:57.424 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:57.424 18:41:19 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:57.424 18:41:19 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:57.424 18:41:19 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:57.424 18:41:19 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:57.424 18:41:19 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:03:57.424 18:41:19 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:03:57.424 18:41:19 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:57.424 18:41:19 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:03:57.424 18:41:19 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:57.424 18:41:19 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:03:57.424 18:41:19 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:03:57.424 18:41:19 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:03:57.424 18:41:19 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:57.424 18:41:19 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:03:57.424 INFO: launching applications... 00:03:57.425 18:41:19 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:57.425 18:41:19 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:03:57.425 18:41:19 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:03:57.425 18:41:19 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:57.425 18:41:19 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:57.425 18:41:19 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:03:57.425 18:41:19 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:57.425 18:41:19 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:57.425 18:41:19 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3452905 00:03:57.425 18:41:19 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:57.425 Waiting for target to run... 
00:03:57.425 18:41:19 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3452905 /var/tmp/spdk_tgt.sock 00:03:57.425 18:41:19 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 3452905 ']' 00:03:57.425 18:41:19 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:57.425 18:41:19 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:57.425 18:41:19 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:57.425 18:41:19 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:57.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:57.425 18:41:19 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:57.425 18:41:19 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:57.425 [2024-11-20 18:41:19.606592] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:03:57.425 [2024-11-20 18:41:19.606641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3452905 ] 00:03:57.992 [2024-11-20 18:41:20.060193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:57.992 [2024-11-20 18:41:20.104182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.251 18:41:20 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:58.251 18:41:20 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:03:58.251 18:41:20 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:03:58.251 00:03:58.251 18:41:20 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:03:58.251 INFO: shutting down applications... 00:03:58.251 18:41:20 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:03:58.251 18:41:20 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:03:58.251 18:41:20 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:58.251 18:41:20 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3452905 ]] 00:03:58.251 18:41:20 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3452905 00:03:58.251 18:41:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:58.251 18:41:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:58.251 18:41:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3452905 00:03:58.251 18:41:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:03:58.819 18:41:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:03:58.819 18:41:20 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:03:58.819 18:41:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3452905 00:03:58.819 18:41:20 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:58.819 18:41:20 json_config_extra_key -- json_config/common.sh@43 -- # break 00:03:58.819 18:41:20 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:58.819 18:41:20 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:58.819 SPDK target shutdown done 00:03:58.819 18:41:20 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:03:58.819 Success 00:03:58.819 00:03:58.819 real 0m1.561s 00:03:58.819 user 0m1.172s 00:03:58.819 sys 0m0.566s 00:03:58.820 18:41:20 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:58.820 18:41:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:58.820 ************************************ 00:03:58.820 END TEST json_config_extra_key 00:03:58.820 ************************************ 00:03:58.820 18:41:20 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:58.820 18:41:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:58.820 18:41:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.820 18:41:20 -- common/autotest_common.sh@10 -- # set +x 00:03:58.820 ************************************ 00:03:58.820 START TEST alias_rpc 00:03:58.820 ************************************ 00:03:58.820 18:41:21 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:58.820 * Looking for test storage... 
00:03:58.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:03:58.820 18:41:21 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:58.820 18:41:21 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:58.820 18:41:21 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:59.079 18:41:21 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:59.079 18:41:21 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:59.079 18:41:21 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:59.079 18:41:21 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:59.079 18:41:21 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:59.079 18:41:21 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:59.079 18:41:21 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:59.079 18:41:21 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:59.079 18:41:21 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:59.079 18:41:21 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:59.079 18:41:21 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:59.079 18:41:21 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:59.079 18:41:21 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:59.079 18:41:21 alias_rpc -- scripts/common.sh@345 -- # : 1 00:03:59.079 18:41:21 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:59.079 18:41:21 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:59.079 18:41:21 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:59.079 18:41:21 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:03:59.079 18:41:21 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:59.079 18:41:21 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:03:59.079 18:41:21 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:59.079 18:41:21 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:59.079 18:41:21 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:03:59.079 18:41:21 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:59.079 18:41:21 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:03:59.079 18:41:21 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:59.079 18:41:21 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:59.079 18:41:21 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:59.079 18:41:21 alias_rpc -- scripts/common.sh@368 -- # return 0 00:03:59.079 18:41:21 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:59.079 18:41:21 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:59.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.080 --rc genhtml_branch_coverage=1 00:03:59.080 --rc genhtml_function_coverage=1 00:03:59.080 --rc genhtml_legend=1 00:03:59.080 --rc geninfo_all_blocks=1 00:03:59.080 --rc geninfo_unexecuted_blocks=1 00:03:59.080 00:03:59.080 ' 00:03:59.080 18:41:21 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:59.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.080 --rc genhtml_branch_coverage=1 00:03:59.080 --rc genhtml_function_coverage=1 00:03:59.080 --rc genhtml_legend=1 00:03:59.080 --rc geninfo_all_blocks=1 00:03:59.080 --rc geninfo_unexecuted_blocks=1 00:03:59.080 00:03:59.080 ' 00:03:59.080 18:41:21 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:03:59.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.080 --rc genhtml_branch_coverage=1 00:03:59.080 --rc genhtml_function_coverage=1 00:03:59.080 --rc genhtml_legend=1 00:03:59.080 --rc geninfo_all_blocks=1 00:03:59.080 --rc geninfo_unexecuted_blocks=1 00:03:59.080 00:03:59.080 ' 00:03:59.080 18:41:21 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:59.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.080 --rc genhtml_branch_coverage=1 00:03:59.080 --rc genhtml_function_coverage=1 00:03:59.080 --rc genhtml_legend=1 00:03:59.080 --rc geninfo_all_blocks=1 00:03:59.080 --rc geninfo_unexecuted_blocks=1 00:03:59.080 00:03:59.080 ' 00:03:59.080 18:41:21 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:03:59.080 18:41:21 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3453235 00:03:59.080 18:41:21 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3453235 00:03:59.080 18:41:21 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:59.080 18:41:21 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 3453235 ']' 00:03:59.080 18:41:21 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:59.080 18:41:21 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:59.080 18:41:21 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:59.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:59.080 18:41:21 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:59.080 18:41:21 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.080 [2024-11-20 18:41:21.244127] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
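The long `scripts/common.sh@333-368` trace above (`lt 1.15 2` via `cmp_versions`) splits both version strings on `.`, `-`, and `:` and compares the components numerically, left to right. A condensed standalone sketch of that comparison (an approximation of what the trace shows, not SPDK's exact `cmp_versions`):

```shell
# lt A B: succeed iff dotted version A sorts strictly before B.
# Components are split on . - : and a missing component defaults to 0.
lt() {
  local IFS=.-:
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first is newer
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first is older
  done
  return 1                                            # equal: not less-than
}

lt 1.15 2 && echo '1.15 sorts before 2'
```

Because components compare numerically rather than lexicographically, `lt 1.9 1.15` is true (9 < 15), which is exactly why the harness uses this instead of plain string comparison to pick the lcov option set.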
00:03:59.080 [2024-11-20 18:41:21.244175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3453235 ] 00:03:59.080 [2024-11-20 18:41:21.319513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:59.080 [2024-11-20 18:41:21.361323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:59.339 18:41:21 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:59.339 18:41:21 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:03:59.339 18:41:21 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:03:59.598 18:41:21 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3453235 00:03:59.598 18:41:21 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 3453235 ']' 00:03:59.598 18:41:21 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 3453235 00:03:59.598 18:41:21 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:03:59.598 18:41:21 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:59.598 18:41:21 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3453235 00:03:59.598 18:41:21 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:59.598 18:41:21 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:59.598 18:41:21 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3453235' 00:03:59.598 killing process with pid 3453235 00:03:59.598 18:41:21 alias_rpc -- common/autotest_common.sh@973 -- # kill 3453235 00:03:59.598 18:41:21 alias_rpc -- common/autotest_common.sh@978 -- # wait 3453235 00:03:59.857 00:03:59.857 real 0m1.141s 00:03:59.857 user 0m1.163s 00:03:59.857 sys 0m0.416s 00:03:59.857 18:41:22 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.857 18:41:22 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.857 ************************************ 00:03:59.857 END TEST alias_rpc 00:03:59.857 ************************************ 00:04:00.117 18:41:22 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:00.117 18:41:22 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:00.117 18:41:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.117 18:41:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.117 18:41:22 -- common/autotest_common.sh@10 -- # set +x 00:04:00.117 ************************************ 00:04:00.117 START TEST spdkcli_tcp 00:04:00.117 ************************************ 00:04:00.117 18:41:22 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:00.117 * Looking for test storage... 
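`killprocess`, traced at the end of the alias_rpc run just above (uname check, `ps --no-headers -o comm=`, `kill`, `wait`), looks the process up by pid, refuses to signal anything whose comm is `sudo`, and reaps the child after killing it. A condensed sketch of that pattern (approximate, not SPDK's exact helper):

```shell
# killprocess PID: name-check, kill, and reap one spdk_tgt-style child.
killprocess() {
  local pid=$1 name
  [ -n "$pid" ] || return 1
  name=$(ps --no-headers -o comm= "$pid") || return 1   # already gone?
  [ "$name" = sudo ] && return 1                        # never kill the sudo wrapper
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true                       # reap if it is our child
}

sleep 30 &
killprocess $!
```

The comm lookup is the safety interlock visible in the trace (`'[' reactor_0 = sudo ']'`): when the target was launched via sudo, killing the wrapper instead of the reactor would leave the real process orphaned.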
00:04:00.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:00.117 18:41:22 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:00.117 18:41:22 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:00.117 18:41:22 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:00.117 18:41:22 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:00.117 18:41:22 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:00.117 18:41:22 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:00.117 18:41:22 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:00.117 18:41:22 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:00.117 18:41:22 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:00.117 18:41:22 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:00.117 18:41:22 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:00.117 18:41:22 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:00.117 18:41:22 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:00.117 18:41:22 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:00.117 18:41:22 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:00.117 18:41:22 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:00.117 18:41:22 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:00.117 18:41:22 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:00.117 18:41:22 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:00.117 18:41:22 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:00.117 18:41:22 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:00.117 18:41:22 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:00.117 18:41:22 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:00.117 18:41:22 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:00.117 18:41:22 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:00.117 18:41:22 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:00.117 18:41:22 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:00.117 18:41:22 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:00.117 18:41:22 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:00.117 18:41:22 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:00.117 18:41:22 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:00.117 18:41:22 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:00.117 18:41:22 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:00.117 18:41:22 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:00.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.117 --rc genhtml_branch_coverage=1 00:04:00.117 --rc genhtml_function_coverage=1 00:04:00.117 --rc genhtml_legend=1 00:04:00.117 --rc geninfo_all_blocks=1 00:04:00.117 --rc geninfo_unexecuted_blocks=1 00:04:00.117 00:04:00.117 ' 00:04:00.117 18:41:22 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:00.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.117 --rc genhtml_branch_coverage=1 00:04:00.117 --rc genhtml_function_coverage=1 00:04:00.117 --rc genhtml_legend=1 00:04:00.117 --rc geninfo_all_blocks=1 00:04:00.117 --rc geninfo_unexecuted_blocks=1 00:04:00.117 00:04:00.117 ' 00:04:00.117 18:41:22 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:00.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.117 --rc genhtml_branch_coverage=1 00:04:00.117 --rc genhtml_function_coverage=1 00:04:00.117 --rc genhtml_legend=1 00:04:00.117 --rc geninfo_all_blocks=1 00:04:00.117 --rc geninfo_unexecuted_blocks=1 00:04:00.117 00:04:00.117 ' 00:04:00.117 18:41:22 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:00.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.117 --rc genhtml_branch_coverage=1 00:04:00.117 --rc genhtml_function_coverage=1 00:04:00.117 --rc genhtml_legend=1 00:04:00.117 --rc geninfo_all_blocks=1 00:04:00.117 --rc geninfo_unexecuted_blocks=1 00:04:00.117 00:04:00.117 ' 00:04:00.118 18:41:22 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:00.118 18:41:22 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:00.118 18:41:22 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:00.118 18:41:22 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:00.118 18:41:22 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:00.118 18:41:22 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:00.118 18:41:22 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:00.118 18:41:22 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:00.118 18:41:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:00.118 18:41:22 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3453491 00:04:00.118 18:41:22 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3453491 00:04:00.118 18:41:22 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:00.118 18:41:22 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 3453491 ']' 00:04:00.118 18:41:22 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:00.118 18:41:22 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:00.118 18:41:22 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:00.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:00.118 18:41:22 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:00.118 18:41:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:00.376 [2024-11-20 18:41:22.449899] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:04:00.376 [2024-11-20 18:41:22.449945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3453491 ] 00:04:00.376 [2024-11-20 18:41:22.524132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:00.376 [2024-11-20 18:41:22.564875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:00.376 [2024-11-20 18:41:22.564876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.635 18:41:22 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:00.635 18:41:22 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:00.635 18:41:22 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3453675 00:04:00.635 18:41:22 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:00.635 18:41:22 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:00.894 [ 00:04:00.894 "bdev_malloc_delete", 00:04:00.894 "bdev_malloc_create", 00:04:00.894 "bdev_null_resize", 00:04:00.894 "bdev_null_delete", 00:04:00.894 "bdev_null_create", 00:04:00.894 "bdev_nvme_cuse_unregister", 00:04:00.894 "bdev_nvme_cuse_register", 00:04:00.894 "bdev_opal_new_user", 00:04:00.894 "bdev_opal_set_lock_state", 00:04:00.894 "bdev_opal_delete", 00:04:00.894 "bdev_opal_get_info", 00:04:00.894 "bdev_opal_create", 00:04:00.894 "bdev_nvme_opal_revert", 00:04:00.894 "bdev_nvme_opal_init", 00:04:00.894 "bdev_nvme_send_cmd", 00:04:00.894 "bdev_nvme_set_keys", 00:04:00.894 "bdev_nvme_get_path_iostat", 00:04:00.894 "bdev_nvme_get_mdns_discovery_info", 00:04:00.894 "bdev_nvme_stop_mdns_discovery", 00:04:00.894 "bdev_nvme_start_mdns_discovery", 00:04:00.894 "bdev_nvme_set_multipath_policy", 00:04:00.894 "bdev_nvme_set_preferred_path", 00:04:00.894 "bdev_nvme_get_io_paths", 00:04:00.894 "bdev_nvme_remove_error_injection", 00:04:00.894 "bdev_nvme_add_error_injection", 00:04:00.894 "bdev_nvme_get_discovery_info", 00:04:00.894 "bdev_nvme_stop_discovery", 00:04:00.894 "bdev_nvme_start_discovery", 00:04:00.894 "bdev_nvme_get_controller_health_info", 00:04:00.895 "bdev_nvme_disable_controller", 00:04:00.895 "bdev_nvme_enable_controller", 00:04:00.895 "bdev_nvme_reset_controller", 00:04:00.895 "bdev_nvme_get_transport_statistics", 00:04:00.895 "bdev_nvme_apply_firmware", 00:04:00.895 "bdev_nvme_detach_controller", 00:04:00.895 "bdev_nvme_get_controllers", 00:04:00.895 "bdev_nvme_attach_controller", 00:04:00.895 "bdev_nvme_set_hotplug", 00:04:00.895 "bdev_nvme_set_options", 00:04:00.895 "bdev_passthru_delete", 00:04:00.895 "bdev_passthru_create", 00:04:00.895 "bdev_lvol_set_parent_bdev", 00:04:00.895 "bdev_lvol_set_parent", 00:04:00.895 "bdev_lvol_check_shallow_copy", 00:04:00.895 "bdev_lvol_start_shallow_copy", 00:04:00.895 
"bdev_lvol_grow_lvstore", 00:04:00.895 "bdev_lvol_get_lvols", 00:04:00.895 "bdev_lvol_get_lvstores", 00:04:00.895 "bdev_lvol_delete", 00:04:00.895 "bdev_lvol_set_read_only", 00:04:00.895 "bdev_lvol_resize", 00:04:00.895 "bdev_lvol_decouple_parent", 00:04:00.895 "bdev_lvol_inflate", 00:04:00.895 "bdev_lvol_rename", 00:04:00.895 "bdev_lvol_clone_bdev", 00:04:00.895 "bdev_lvol_clone", 00:04:00.895 "bdev_lvol_snapshot", 00:04:00.895 "bdev_lvol_create", 00:04:00.895 "bdev_lvol_delete_lvstore", 00:04:00.895 "bdev_lvol_rename_lvstore", 00:04:00.895 "bdev_lvol_create_lvstore", 00:04:00.895 "bdev_raid_set_options", 00:04:00.895 "bdev_raid_remove_base_bdev", 00:04:00.895 "bdev_raid_add_base_bdev", 00:04:00.895 "bdev_raid_delete", 00:04:00.895 "bdev_raid_create", 00:04:00.895 "bdev_raid_get_bdevs", 00:04:00.895 "bdev_error_inject_error", 00:04:00.895 "bdev_error_delete", 00:04:00.895 "bdev_error_create", 00:04:00.895 "bdev_split_delete", 00:04:00.895 "bdev_split_create", 00:04:00.895 "bdev_delay_delete", 00:04:00.895 "bdev_delay_create", 00:04:00.895 "bdev_delay_update_latency", 00:04:00.895 "bdev_zone_block_delete", 00:04:00.895 "bdev_zone_block_create", 00:04:00.895 "blobfs_create", 00:04:00.895 "blobfs_detect", 00:04:00.895 "blobfs_set_cache_size", 00:04:00.895 "bdev_aio_delete", 00:04:00.895 "bdev_aio_rescan", 00:04:00.895 "bdev_aio_create", 00:04:00.895 "bdev_ftl_set_property", 00:04:00.895 "bdev_ftl_get_properties", 00:04:00.895 "bdev_ftl_get_stats", 00:04:00.895 "bdev_ftl_unmap", 00:04:00.895 "bdev_ftl_unload", 00:04:00.895 "bdev_ftl_delete", 00:04:00.895 "bdev_ftl_load", 00:04:00.895 "bdev_ftl_create", 00:04:00.895 "bdev_virtio_attach_controller", 00:04:00.895 "bdev_virtio_scsi_get_devices", 00:04:00.895 "bdev_virtio_detach_controller", 00:04:00.895 "bdev_virtio_blk_set_hotplug", 00:04:00.895 "bdev_iscsi_delete", 00:04:00.895 "bdev_iscsi_create", 00:04:00.895 "bdev_iscsi_set_options", 00:04:00.895 "accel_error_inject_error", 00:04:00.895 "ioat_scan_accel_module", 
00:04:00.895 "dsa_scan_accel_module", 00:04:00.895 "iaa_scan_accel_module", 00:04:00.895 "vfu_virtio_create_fs_endpoint", 00:04:00.895 "vfu_virtio_create_scsi_endpoint", 00:04:00.895 "vfu_virtio_scsi_remove_target", 00:04:00.895 "vfu_virtio_scsi_add_target", 00:04:00.895 "vfu_virtio_create_blk_endpoint", 00:04:00.895 "vfu_virtio_delete_endpoint", 00:04:00.895 "keyring_file_remove_key", 00:04:00.895 "keyring_file_add_key", 00:04:00.895 "keyring_linux_set_options", 00:04:00.895 "fsdev_aio_delete", 00:04:00.895 "fsdev_aio_create", 00:04:00.895 "iscsi_get_histogram", 00:04:00.895 "iscsi_enable_histogram", 00:04:00.895 "iscsi_set_options", 00:04:00.895 "iscsi_get_auth_groups", 00:04:00.895 "iscsi_auth_group_remove_secret", 00:04:00.895 "iscsi_auth_group_add_secret", 00:04:00.895 "iscsi_delete_auth_group", 00:04:00.895 "iscsi_create_auth_group", 00:04:00.895 "iscsi_set_discovery_auth", 00:04:00.895 "iscsi_get_options", 00:04:00.895 "iscsi_target_node_request_logout", 00:04:00.895 "iscsi_target_node_set_redirect", 00:04:00.895 "iscsi_target_node_set_auth", 00:04:00.895 "iscsi_target_node_add_lun", 00:04:00.895 "iscsi_get_stats", 00:04:00.895 "iscsi_get_connections", 00:04:00.895 "iscsi_portal_group_set_auth", 00:04:00.895 "iscsi_start_portal_group", 00:04:00.895 "iscsi_delete_portal_group", 00:04:00.895 "iscsi_create_portal_group", 00:04:00.895 "iscsi_get_portal_groups", 00:04:00.895 "iscsi_delete_target_node", 00:04:00.895 "iscsi_target_node_remove_pg_ig_maps", 00:04:00.895 "iscsi_target_node_add_pg_ig_maps", 00:04:00.895 "iscsi_create_target_node", 00:04:00.895 "iscsi_get_target_nodes", 00:04:00.895 "iscsi_delete_initiator_group", 00:04:00.895 "iscsi_initiator_group_remove_initiators", 00:04:00.895 "iscsi_initiator_group_add_initiators", 00:04:00.895 "iscsi_create_initiator_group", 00:04:00.895 "iscsi_get_initiator_groups", 00:04:00.895 "nvmf_set_crdt", 00:04:00.895 "nvmf_set_config", 00:04:00.895 "nvmf_set_max_subsystems", 00:04:00.895 "nvmf_stop_mdns_prr", 
00:04:00.895 "nvmf_publish_mdns_prr", 00:04:00.895 "nvmf_subsystem_get_listeners", 00:04:00.895 "nvmf_subsystem_get_qpairs", 00:04:00.895 "nvmf_subsystem_get_controllers", 00:04:00.895 "nvmf_get_stats", 00:04:00.895 "nvmf_get_transports", 00:04:00.895 "nvmf_create_transport", 00:04:00.895 "nvmf_get_targets", 00:04:00.895 "nvmf_delete_target", 00:04:00.895 "nvmf_create_target", 00:04:00.895 "nvmf_subsystem_allow_any_host", 00:04:00.895 "nvmf_subsystem_set_keys", 00:04:00.895 "nvmf_subsystem_remove_host", 00:04:00.895 "nvmf_subsystem_add_host", 00:04:00.895 "nvmf_ns_remove_host", 00:04:00.895 "nvmf_ns_add_host", 00:04:00.895 "nvmf_subsystem_remove_ns", 00:04:00.895 "nvmf_subsystem_set_ns_ana_group", 00:04:00.895 "nvmf_subsystem_add_ns", 00:04:00.895 "nvmf_subsystem_listener_set_ana_state", 00:04:00.895 "nvmf_discovery_get_referrals", 00:04:00.895 "nvmf_discovery_remove_referral", 00:04:00.895 "nvmf_discovery_add_referral", 00:04:00.895 "nvmf_subsystem_remove_listener", 00:04:00.895 "nvmf_subsystem_add_listener", 00:04:00.895 "nvmf_delete_subsystem", 00:04:00.895 "nvmf_create_subsystem", 00:04:00.895 "nvmf_get_subsystems", 00:04:00.895 "env_dpdk_get_mem_stats", 00:04:00.895 "nbd_get_disks", 00:04:00.895 "nbd_stop_disk", 00:04:00.895 "nbd_start_disk", 00:04:00.895 "ublk_recover_disk", 00:04:00.895 "ublk_get_disks", 00:04:00.895 "ublk_stop_disk", 00:04:00.895 "ublk_start_disk", 00:04:00.895 "ublk_destroy_target", 00:04:00.895 "ublk_create_target", 00:04:00.895 "virtio_blk_create_transport", 00:04:00.895 "virtio_blk_get_transports", 00:04:00.895 "vhost_controller_set_coalescing", 00:04:00.895 "vhost_get_controllers", 00:04:00.895 "vhost_delete_controller", 00:04:00.895 "vhost_create_blk_controller", 00:04:00.895 "vhost_scsi_controller_remove_target", 00:04:00.895 "vhost_scsi_controller_add_target", 00:04:00.895 "vhost_start_scsi_controller", 00:04:00.895 "vhost_create_scsi_controller", 00:04:00.895 "thread_set_cpumask", 00:04:00.895 "scheduler_set_options", 00:04:00.895 
"framework_get_governor", 00:04:00.895 "framework_get_scheduler", 00:04:00.895 "framework_set_scheduler", 00:04:00.895 "framework_get_reactors", 00:04:00.895 "thread_get_io_channels", 00:04:00.895 "thread_get_pollers", 00:04:00.895 "thread_get_stats", 00:04:00.895 "framework_monitor_context_switch", 00:04:00.895 "spdk_kill_instance", 00:04:00.895 "log_enable_timestamps", 00:04:00.895 "log_get_flags", 00:04:00.895 "log_clear_flag", 00:04:00.895 "log_set_flag", 00:04:00.895 "log_get_level", 00:04:00.895 "log_set_level", 00:04:00.895 "log_get_print_level", 00:04:00.895 "log_set_print_level", 00:04:00.896 "framework_enable_cpumask_locks", 00:04:00.896 "framework_disable_cpumask_locks", 00:04:00.896 "framework_wait_init", 00:04:00.896 "framework_start_init", 00:04:00.896 "scsi_get_devices", 00:04:00.896 "bdev_get_histogram", 00:04:00.896 "bdev_enable_histogram", 00:04:00.896 "bdev_set_qos_limit", 00:04:00.896 "bdev_set_qd_sampling_period", 00:04:00.896 "bdev_get_bdevs", 00:04:00.896 "bdev_reset_iostat", 00:04:00.896 "bdev_get_iostat", 00:04:00.896 "bdev_examine", 00:04:00.896 "bdev_wait_for_examine", 00:04:00.896 "bdev_set_options", 00:04:00.896 "accel_get_stats", 00:04:00.896 "accel_set_options", 00:04:00.896 "accel_set_driver", 00:04:00.896 "accel_crypto_key_destroy", 00:04:00.896 "accel_crypto_keys_get", 00:04:00.896 "accel_crypto_key_create", 00:04:00.896 "accel_assign_opc", 00:04:00.896 "accel_get_module_info", 00:04:00.896 "accel_get_opc_assignments", 00:04:00.896 "vmd_rescan", 00:04:00.896 "vmd_remove_device", 00:04:00.896 "vmd_enable", 00:04:00.896 "sock_get_default_impl", 00:04:00.896 "sock_set_default_impl", 00:04:00.896 "sock_impl_set_options", 00:04:00.896 "sock_impl_get_options", 00:04:00.896 "iobuf_get_stats", 00:04:00.896 "iobuf_set_options", 00:04:00.896 "keyring_get_keys", 00:04:00.896 "vfu_tgt_set_base_path", 00:04:00.896 "framework_get_pci_devices", 00:04:00.896 "framework_get_config", 00:04:00.896 "framework_get_subsystems", 00:04:00.896 
"fsdev_set_opts", 00:04:00.896 "fsdev_get_opts", 00:04:00.896 "trace_get_info", 00:04:00.896 "trace_get_tpoint_group_mask", 00:04:00.896 "trace_disable_tpoint_group", 00:04:00.896 "trace_enable_tpoint_group", 00:04:00.896 "trace_clear_tpoint_mask", 00:04:00.896 "trace_set_tpoint_mask", 00:04:00.896 "notify_get_notifications", 00:04:00.896 "notify_get_types", 00:04:00.896 "spdk_get_version", 00:04:00.896 "rpc_get_methods" 00:04:00.896 ] 00:04:00.896 18:41:22 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:00.896 18:41:22 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:00.896 18:41:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:00.896 18:41:23 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:00.896 18:41:23 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3453491 00:04:00.896 18:41:23 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 3453491 ']' 00:04:00.896 18:41:23 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 3453491 00:04:00.896 18:41:23 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:00.896 18:41:23 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:00.896 18:41:23 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3453491 00:04:00.896 18:41:23 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:00.896 18:41:23 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:00.896 18:41:23 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3453491' 00:04:00.896 killing process with pid 3453491 00:04:00.896 18:41:23 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 3453491 00:04:00.896 18:41:23 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 3453491 00:04:01.155 00:04:01.155 real 0m1.154s 00:04:01.155 user 0m1.949s 00:04:01.155 sys 0m0.444s 00:04:01.155 18:41:23 spdkcli_tcp -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:04:01.155 18:41:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:01.155 ************************************ 00:04:01.155 END TEST spdkcli_tcp 00:04:01.155 ************************************ 00:04:01.155 18:41:23 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:01.155 18:41:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.155 18:41:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.155 18:41:23 -- common/autotest_common.sh@10 -- # set +x 00:04:01.155 ************************************ 00:04:01.155 START TEST dpdk_mem_utility 00:04:01.155 ************************************ 00:04:01.155 18:41:23 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:01.414 * Looking for test storage... 00:04:01.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:01.414 18:41:23 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:01.414 18:41:23 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:01.414 18:41:23 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:01.414 18:41:23 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:01.414 18:41:23 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:01.414 18:41:23 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:01.414 18:41:23 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:01.414 18:41:23 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:01.414 18:41:23 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:01.414 18:41:23 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:01.414 18:41:23 
dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:01.414 18:41:23 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:01.414 18:41:23 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:01.414 18:41:23 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:01.414 18:41:23 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:01.414 18:41:23 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:01.414 18:41:23 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:01.414 18:41:23 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:01.414 18:41:23 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:01.414 18:41:23 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:01.415 18:41:23 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:01.415 18:41:23 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:01.415 18:41:23 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:01.415 18:41:23 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:01.415 18:41:23 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:01.415 18:41:23 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:01.415 18:41:23 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:01.415 18:41:23 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:01.415 18:41:23 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:01.415 18:41:23 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:01.415 18:41:23 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:01.415 18:41:23 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:01.415 18:41:23 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:01.415 18:41:23 dpdk_mem_utility 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:01.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.415 --rc genhtml_branch_coverage=1 00:04:01.415 --rc genhtml_function_coverage=1 00:04:01.415 --rc genhtml_legend=1 00:04:01.415 --rc geninfo_all_blocks=1 00:04:01.415 --rc geninfo_unexecuted_blocks=1 00:04:01.415 00:04:01.415 ' 00:04:01.415 18:41:23 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:01.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.415 --rc genhtml_branch_coverage=1 00:04:01.415 --rc genhtml_function_coverage=1 00:04:01.415 --rc genhtml_legend=1 00:04:01.415 --rc geninfo_all_blocks=1 00:04:01.415 --rc geninfo_unexecuted_blocks=1 00:04:01.415 00:04:01.415 ' 00:04:01.415 18:41:23 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:01.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.415 --rc genhtml_branch_coverage=1 00:04:01.415 --rc genhtml_function_coverage=1 00:04:01.415 --rc genhtml_legend=1 00:04:01.415 --rc geninfo_all_blocks=1 00:04:01.415 --rc geninfo_unexecuted_blocks=1 00:04:01.415 00:04:01.415 ' 00:04:01.415 18:41:23 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:01.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.415 --rc genhtml_branch_coverage=1 00:04:01.415 --rc genhtml_function_coverage=1 00:04:01.415 --rc genhtml_legend=1 00:04:01.415 --rc geninfo_all_blocks=1 00:04:01.415 --rc geninfo_unexecuted_blocks=1 00:04:01.415 00:04:01.415 ' 00:04:01.415 18:41:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:01.415 18:41:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3453793 00:04:01.415 18:41:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.415 18:41:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3453793 00:04:01.415 18:41:23 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 3453793 ']' 00:04:01.415 18:41:23 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:01.415 18:41:23 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:01.415 18:41:23 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:01.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:01.415 18:41:23 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:01.415 18:41:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:01.415 [2024-11-20 18:41:23.675157] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:04:01.415 [2024-11-20 18:41:23.675208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3453793 ] 00:04:01.674 [2024-11-20 18:41:23.748331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.674 [2024-11-20 18:41:23.787600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.934 18:41:24 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:01.934 18:41:24 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:01.934 18:41:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:01.934 18:41:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:01.934 18:41:24 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.934 18:41:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:01.934 { 00:04:01.934 "filename": "/tmp/spdk_mem_dump.txt" 00:04:01.934 } 00:04:01.934 18:41:24 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.934 18:41:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:01.934 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:01.934 1 heaps totaling size 818.000000 MiB 00:04:01.934 size: 818.000000 MiB heap id: 0 00:04:01.934 end heaps---------- 00:04:01.934 9 mempools totaling size 603.782043 MiB 00:04:01.934 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:01.934 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:01.934 size: 100.555481 MiB name: bdev_io_3453793 00:04:01.934 size: 50.003479 MiB name: msgpool_3453793 00:04:01.934 size: 36.509338 MiB name: fsdev_io_3453793 
00:04:01.934 size: 21.763794 MiB name: PDU_Pool 00:04:01.934 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:01.934 size: 4.133484 MiB name: evtpool_3453793 00:04:01.934 size: 0.026123 MiB name: Session_Pool 00:04:01.934 end mempools------- 00:04:01.934 6 memzones totaling size 4.142822 MiB 00:04:01.934 size: 1.000366 MiB name: RG_ring_0_3453793 00:04:01.934 size: 1.000366 MiB name: RG_ring_1_3453793 00:04:01.934 size: 1.000366 MiB name: RG_ring_4_3453793 00:04:01.934 size: 1.000366 MiB name: RG_ring_5_3453793 00:04:01.934 size: 0.125366 MiB name: RG_ring_2_3453793 00:04:01.934 size: 0.015991 MiB name: RG_ring_3_3453793 00:04:01.934 end memzones------- 00:04:01.934 18:41:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:01.934 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:01.934 list of free elements. size: 10.852478 MiB 00:04:01.934 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:01.934 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:01.934 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:01.934 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:01.934 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:01.934 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:01.934 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:01.934 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:01.934 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:01.934 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:01.934 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:01.934 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:01.934 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:01.934 element at address: 0x200028200000 with size: 0.410034 
MiB 00:04:01.934 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:01.934 list of standard malloc elements. size: 199.218628 MiB 00:04:01.934 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:01.934 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:01.934 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:01.934 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:01.934 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:01.934 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:01.934 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:01.934 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:01.934 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:01.934 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:01.934 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:01.934 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:01.934 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:01.934 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:01.934 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:01.934 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:01.934 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:01.934 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:01.934 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:01.934 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:01.934 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:01.934 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:01.934 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:01.934 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:01.934 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:01.934 element at address: 0x200000cff0c0 with 
size: 0.000183 MiB 00:04:01.934 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:01.934 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:01.934 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:01.934 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:01.934 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:01.934 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:01.934 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:01.934 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:01.934 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:01.934 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:01.934 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:01.935 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:01.935 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:01.935 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:01.935 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:01.935 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:01.935 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:01.935 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:01.935 list of memzone associated elements. 
size: 607.928894 MiB 00:04:01.935 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:01.935 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:01.935 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:01.935 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:01.935 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:01.935 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_3453793_0 00:04:01.935 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:01.935 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3453793_0 00:04:01.935 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:01.935 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3453793_0 00:04:01.935 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:01.935 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:01.935 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:01.935 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:01.935 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:01.935 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3453793_0 00:04:01.935 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:01.935 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3453793 00:04:01.935 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:01.935 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3453793 00:04:01.935 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:01.935 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:01.935 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:01.935 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:01.935 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:01.935 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:01.935 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:01.935 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:01.935 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:01.935 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3453793 00:04:01.935 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:01.935 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3453793 00:04:01.935 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:01.935 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3453793 00:04:01.935 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:04:01.935 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3453793 00:04:01.935 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:01.935 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3453793 00:04:01.935 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:01.935 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3453793 00:04:01.935 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:01.935 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:01.935 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:01.935 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:01.935 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:01.935 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:01.935 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:01.935 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3453793 00:04:01.935 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:01.935 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3453793 00:04:01.935 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:04:01.935 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:01.935 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:01.935 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:01.935 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:01.935 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3453793 00:04:01.935 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:01.935 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:01.935 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:01.935 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3453793 00:04:01.935 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:01.935 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3453793 00:04:01.935 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:01.935 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3453793 00:04:01.935 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:01.935 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:01.935 18:41:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:01.935 18:41:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3453793 00:04:01.935 18:41:24 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 3453793 ']' 00:04:01.935 18:41:24 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 3453793 00:04:01.935 18:41:24 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:01.935 18:41:24 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:01.935 18:41:24 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3453793 00:04:01.935 18:41:24 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:01.935 18:41:24 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:01.935 18:41:24 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3453793' 00:04:01.935 killing process with pid 3453793 00:04:01.935 18:41:24 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 3453793 00:04:01.935 18:41:24 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 3453793 00:04:02.195 00:04:02.195 real 0m0.997s 00:04:02.195 user 0m0.941s 00:04:02.195 sys 0m0.374s 00:04:02.195 18:41:24 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.195 18:41:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:02.195 ************************************ 00:04:02.195 END TEST dpdk_mem_utility 00:04:02.195 ************************************ 00:04:02.195 18:41:24 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:02.195 18:41:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.195 18:41:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.195 18:41:24 -- common/autotest_common.sh@10 -- # set +x 00:04:02.195 ************************************ 00:04:02.195 START TEST event 00:04:02.195 ************************************ 00:04:02.195 18:41:24 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:02.454 * Looking for test storage... 
00:04:02.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:02.454 18:41:24 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:02.454 18:41:24 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:02.454 18:41:24 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:02.454 18:41:24 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:02.454 18:41:24 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:02.454 18:41:24 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:02.454 18:41:24 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:02.454 18:41:24 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.454 18:41:24 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:02.454 18:41:24 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:02.454 18:41:24 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:02.454 18:41:24 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:02.454 18:41:24 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:02.454 18:41:24 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:02.454 18:41:24 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:02.454 18:41:24 event -- scripts/common.sh@344 -- # case "$op" in 00:04:02.454 18:41:24 event -- scripts/common.sh@345 -- # : 1 00:04:02.454 18:41:24 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:02.454 18:41:24 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:02.454 18:41:24 event -- scripts/common.sh@365 -- # decimal 1 00:04:02.454 18:41:24 event -- scripts/common.sh@353 -- # local d=1 00:04:02.454 18:41:24 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.454 18:41:24 event -- scripts/common.sh@355 -- # echo 1 00:04:02.454 18:41:24 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:02.454 18:41:24 event -- scripts/common.sh@366 -- # decimal 2 00:04:02.454 18:41:24 event -- scripts/common.sh@353 -- # local d=2 00:04:02.454 18:41:24 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.454 18:41:24 event -- scripts/common.sh@355 -- # echo 2 00:04:02.454 18:41:24 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:02.454 18:41:24 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:02.454 18:41:24 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:02.454 18:41:24 event -- scripts/common.sh@368 -- # return 0 00:04:02.454 18:41:24 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.454 18:41:24 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:02.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.454 --rc genhtml_branch_coverage=1 00:04:02.454 --rc genhtml_function_coverage=1 00:04:02.454 --rc genhtml_legend=1 00:04:02.454 --rc geninfo_all_blocks=1 00:04:02.454 --rc geninfo_unexecuted_blocks=1 00:04:02.454 00:04:02.454 ' 00:04:02.454 18:41:24 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:02.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.454 --rc genhtml_branch_coverage=1 00:04:02.454 --rc genhtml_function_coverage=1 00:04:02.454 --rc genhtml_legend=1 00:04:02.454 --rc geninfo_all_blocks=1 00:04:02.454 --rc geninfo_unexecuted_blocks=1 00:04:02.454 00:04:02.454 ' 00:04:02.454 18:41:24 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:02.454 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:02.454 --rc genhtml_branch_coverage=1 00:04:02.454 --rc genhtml_function_coverage=1 00:04:02.454 --rc genhtml_legend=1 00:04:02.454 --rc geninfo_all_blocks=1 00:04:02.454 --rc geninfo_unexecuted_blocks=1 00:04:02.454 00:04:02.454 ' 00:04:02.454 18:41:24 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:02.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.454 --rc genhtml_branch_coverage=1 00:04:02.454 --rc genhtml_function_coverage=1 00:04:02.454 --rc genhtml_legend=1 00:04:02.454 --rc geninfo_all_blocks=1 00:04:02.454 --rc geninfo_unexecuted_blocks=1 00:04:02.454 00:04:02.454 ' 00:04:02.454 18:41:24 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:02.454 18:41:24 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:02.454 18:41:24 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:02.454 18:41:24 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:02.454 18:41:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.454 18:41:24 event -- common/autotest_common.sh@10 -- # set +x 00:04:02.454 ************************************ 00:04:02.454 START TEST event_perf 00:04:02.454 ************************************ 00:04:02.454 18:41:24 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:02.454 Running I/O for 1 seconds...[2024-11-20 18:41:24.749823] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:04:02.454 [2024-11-20 18:41:24.749890] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3454085 ] 00:04:02.714 [2024-11-20 18:41:24.808422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:02.714 [2024-11-20 18:41:24.851546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:02.714 [2024-11-20 18:41:24.851657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:02.714 [2024-11-20 18:41:24.851764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.714 [2024-11-20 18:41:24.851765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:03.649 Running I/O for 1 seconds... 00:04:03.649 lcore 0: 207525 00:04:03.649 lcore 1: 207523 00:04:03.649 lcore 2: 207524 00:04:03.649 lcore 3: 207525 00:04:03.649 done. 
00:04:03.649 00:04:03.649 real 0m1.163s 00:04:03.649 user 0m4.095s 00:04:03.649 sys 0m0.064s 00:04:03.649 18:41:25 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.649 18:41:25 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:03.649 ************************************ 00:04:03.649 END TEST event_perf 00:04:03.649 ************************************ 00:04:03.649 18:41:25 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:03.649 18:41:25 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:03.649 18:41:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.649 18:41:25 event -- common/autotest_common.sh@10 -- # set +x 00:04:03.649 ************************************ 00:04:03.649 START TEST event_reactor 00:04:03.649 ************************************ 00:04:03.649 18:41:25 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:03.907 [2024-11-20 18:41:25.977175] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:04:03.907 [2024-11-20 18:41:25.977236] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3454335 ] 00:04:03.907 [2024-11-20 18:41:26.053252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.907 [2024-11-20 18:41:26.092230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.842 test_start 00:04:04.842 oneshot 00:04:04.842 tick 100 00:04:04.842 tick 100 00:04:04.842 tick 250 00:04:04.842 tick 100 00:04:04.842 tick 100 00:04:04.842 tick 250 00:04:04.842 tick 100 00:04:04.842 tick 500 00:04:04.842 tick 100 00:04:04.842 tick 100 00:04:04.842 tick 250 00:04:04.842 tick 100 00:04:04.842 tick 100 00:04:04.842 test_end 00:04:04.842 00:04:04.842 real 0m1.172s 00:04:04.842 user 0m1.095s 00:04:04.842 sys 0m0.073s 00:04:04.842 18:41:27 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.842 18:41:27 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:04.842 ************************************ 00:04:04.842 END TEST event_reactor 00:04:04.842 ************************************ 00:04:04.842 18:41:27 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:04.842 18:41:27 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:04.842 18:41:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.842 18:41:27 event -- common/autotest_common.sh@10 -- # set +x 00:04:05.101 ************************************ 00:04:05.101 START TEST event_reactor_perf 00:04:05.101 ************************************ 00:04:05.101 18:41:27 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:04:05.101 [2024-11-20 18:41:27.220983] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:04:05.101 [2024-11-20 18:41:27.221040] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3454587 ] 00:04:05.101 [2024-11-20 18:41:27.296524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.101 [2024-11-20 18:41:27.335944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.478 test_start 00:04:06.478 test_end 00:04:06.479 Performance: 522269 events per second 00:04:06.479 00:04:06.479 real 0m1.173s 00:04:06.479 user 0m1.090s 00:04:06.479 sys 0m0.079s 00:04:06.479 18:41:28 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.479 18:41:28 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:06.479 ************************************ 00:04:06.479 END TEST event_reactor_perf 00:04:06.479 ************************************ 00:04:06.479 18:41:28 event -- event/event.sh@49 -- # uname -s 00:04:06.479 18:41:28 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:06.479 18:41:28 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:06.479 18:41:28 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.479 18:41:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.479 18:41:28 event -- common/autotest_common.sh@10 -- # set +x 00:04:06.479 ************************************ 00:04:06.479 START TEST event_scheduler 00:04:06.479 ************************************ 00:04:06.479 18:41:28 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:06.479 * Looking for test storage... 00:04:06.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:06.479 18:41:28 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:06.479 18:41:28 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:06.479 18:41:28 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:06.479 18:41:28 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:06.479 18:41:28 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:06.479 18:41:28 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:06.479 18:41:28 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:06.479 18:41:28 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.479 18:41:28 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:06.479 18:41:28 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:06.479 18:41:28 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:06.479 18:41:28 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:06.479 18:41:28 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:06.479 18:41:28 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:06.479 18:41:28 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:06.479 18:41:28 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:06.479 18:41:28 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:06.479 18:41:28 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:06.479 18:41:28 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:06.479 18:41:28 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:06.479 18:41:28 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:06.479 18:41:28 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.479 18:41:28 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:06.479 18:41:28 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:06.479 18:41:28 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:06.479 18:41:28 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:06.479 18:41:28 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.479 18:41:28 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:06.479 18:41:28 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:06.479 18:41:28 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:06.479 18:41:28 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:06.479 18:41:28 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:06.479 18:41:28 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.479 18:41:28 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:06.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.479 --rc genhtml_branch_coverage=1 00:04:06.479 --rc genhtml_function_coverage=1 00:04:06.479 --rc genhtml_legend=1 00:04:06.479 --rc geninfo_all_blocks=1 00:04:06.479 --rc geninfo_unexecuted_blocks=1 00:04:06.479 00:04:06.479 ' 00:04:06.479 18:41:28 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:06.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.479 --rc genhtml_branch_coverage=1 00:04:06.479 --rc genhtml_function_coverage=1 00:04:06.479 --rc 
genhtml_legend=1 00:04:06.479 --rc geninfo_all_blocks=1 00:04:06.479 --rc geninfo_unexecuted_blocks=1 00:04:06.479 00:04:06.479 ' 00:04:06.479 18:41:28 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:06.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.479 --rc genhtml_branch_coverage=1 00:04:06.479 --rc genhtml_function_coverage=1 00:04:06.479 --rc genhtml_legend=1 00:04:06.479 --rc geninfo_all_blocks=1 00:04:06.479 --rc geninfo_unexecuted_blocks=1 00:04:06.479 00:04:06.479 ' 00:04:06.479 18:41:28 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:06.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.479 --rc genhtml_branch_coverage=1 00:04:06.479 --rc genhtml_function_coverage=1 00:04:06.479 --rc genhtml_legend=1 00:04:06.479 --rc geninfo_all_blocks=1 00:04:06.479 --rc geninfo_unexecuted_blocks=1 00:04:06.479 00:04:06.479 ' 00:04:06.479 18:41:28 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:06.479 18:41:28 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3454869 00:04:06.479 18:41:28 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:06.479 18:41:28 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:06.479 18:41:28 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3454869 00:04:06.479 18:41:28 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 3454869 ']' 00:04:06.479 18:41:28 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:06.479 18:41:28 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:06.479 18:41:28 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:06.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:06.479 18:41:28 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:06.479 18:41:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:06.479 [2024-11-20 18:41:28.665599] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:04:06.479 [2024-11-20 18:41:28.665644] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3454869 ] 00:04:06.479 [2024-11-20 18:41:28.740334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:06.479 [2024-11-20 18:41:28.785474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.479 [2024-11-20 18:41:28.785583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:06.479 [2024-11-20 18:41:28.785691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:06.479 [2024-11-20 18:41:28.785692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:06.739 18:41:28 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:06.739 18:41:28 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:06.739 18:41:28 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:06.739 18:41:28 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.739 18:41:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:06.739 [2024-11-20 18:41:28.814126] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:06.739 [2024-11-20 18:41:28.814141] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:06.739 [2024-11-20 18:41:28.814150] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:06.739 [2024-11-20 18:41:28.814155] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:06.739 [2024-11-20 18:41:28.814160] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:06.739 18:41:28 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.739 18:41:28 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:06.739 18:41:28 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.739 18:41:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:06.739 [2024-11-20 18:41:28.888351] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:06.739 18:41:28 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.739 18:41:28 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:06.739 18:41:28 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.739 18:41:28 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.739 18:41:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:06.739 ************************************ 00:04:06.739 START TEST scheduler_create_thread 00:04:06.739 ************************************ 00:04:06.739 18:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:06.739 18:41:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:06.739 18:41:28 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.739 18:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:06.739 2 00:04:06.739 18:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.739 18:41:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:06.739 18:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.739 18:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:06.739 3 00:04:06.739 18:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.739 18:41:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:06.739 18:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.739 18:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:06.739 4 00:04:06.739 18:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.739 18:41:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:06.739 18:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.739 18:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:06.739 5 00:04:06.739 18:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.739 18:41:28 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:06.739 18:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.739 18:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:06.739 6 00:04:06.739 18:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.739 18:41:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:06.739 18:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.739 18:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:06.739 7 00:04:06.739 18:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.739 18:41:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:06.739 18:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.739 18:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:06.739 8 00:04:06.739 18:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.739 18:41:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:06.739 18:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.739 18:41:28 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:06.739 9 00:04:06.739 18:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.739 18:41:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:06.739 18:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.739 18:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:06.739 10 00:04:06.739 18:41:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.739 18:41:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:06.739 18:41:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.739 18:41:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:06.739 18:41:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.739 18:41:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:06.739 18:41:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:06.739 18:41:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.739 18:41:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:07.323 18:41:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.323 18:41:29 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:07.323 18:41:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.323 18:41:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.700 18:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.700 18:41:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:08.700 18:41:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:08.700 18:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.700 18:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.078 18:41:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.078 00:04:10.078 real 0m3.099s 00:04:10.078 user 0m0.027s 00:04:10.078 sys 0m0.003s 00:04:10.078 18:41:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:10.078 18:41:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.078 ************************************ 00:04:10.078 END TEST scheduler_create_thread 00:04:10.078 ************************************ 00:04:10.078 18:41:32 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:10.078 18:41:32 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3454869 00:04:10.078 18:41:32 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 3454869 ']' 00:04:10.078 18:41:32 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 3454869 00:04:10.078 18:41:32 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:10.078 18:41:32 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:10.078 18:41:32 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3454869 00:04:10.078 18:41:32 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:10.078 18:41:32 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:10.078 18:41:32 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3454869' 00:04:10.078 killing process with pid 3454869 00:04:10.078 18:41:32 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 3454869 00:04:10.078 18:41:32 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 3454869 00:04:10.338 [2024-11-20 18:41:32.403738] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:10.338 00:04:10.338 real 0m4.145s 00:04:10.338 user 0m6.586s 00:04:10.338 sys 0m0.370s 00:04:10.338 18:41:32 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:10.338 18:41:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:10.338 ************************************ 00:04:10.338 END TEST event_scheduler 00:04:10.338 ************************************ 00:04:10.338 18:41:32 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:10.338 18:41:32 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:10.338 18:41:32 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.338 18:41:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.338 18:41:32 event -- common/autotest_common.sh@10 -- # set +x 00:04:10.597 ************************************ 00:04:10.597 START TEST app_repeat 00:04:10.597 ************************************ 00:04:10.597 18:41:32 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:10.597 18:41:32 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:10.597 18:41:32 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:10.597 18:41:32 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:10.597 18:41:32 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:10.597 18:41:32 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:10.597 18:41:32 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:10.597 18:41:32 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:10.597 18:41:32 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3455611 00:04:10.597 18:41:32 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:10.597 18:41:32 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:10.597 18:41:32 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3455611' 00:04:10.597 Process app_repeat pid: 3455611 00:04:10.597 18:41:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:10.597 18:41:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:10.597 spdk_app_start Round 0 00:04:10.597 18:41:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3455611 /var/tmp/spdk-nbd.sock 00:04:10.597 18:41:32 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3455611 ']' 00:04:10.597 18:41:32 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:10.597 18:41:32 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:10.597 18:41:32 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:10.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:10.597 18:41:32 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:10.597 18:41:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:10.597 [2024-11-20 18:41:32.700527] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:04:10.597 [2024-11-20 18:41:32.700576] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3455611 ] 00:04:10.597 [2024-11-20 18:41:32.777539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:10.597 [2024-11-20 18:41:32.820858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:10.597 [2024-11-20 18:41:32.820860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.597 18:41:32 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:10.597 18:41:32 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:10.597 18:41:32 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:10.856 Malloc0 00:04:10.856 18:41:33 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:11.115 Malloc1 00:04:11.115 18:41:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:11.115 18:41:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:11.115 18:41:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:11.115 18:41:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:11.115 18:41:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:11.115 18:41:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:11.115 18:41:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:11.115 
18:41:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:11.115 18:41:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:11.115 18:41:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:11.115 18:41:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:11.115 18:41:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:11.115 18:41:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:11.115 18:41:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:11.115 18:41:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:11.115 18:41:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:11.373 /dev/nbd0 00:04:11.373 18:41:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:11.373 18:41:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:11.373 18:41:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:11.373 18:41:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:11.373 18:41:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:11.373 18:41:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:11.373 18:41:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:11.374 18:41:33 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:11.374 18:41:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:11.374 18:41:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:11.374 18:41:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:11.374 1+0 records in 00:04:11.374 1+0 records out 00:04:11.374 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000194668 s, 21.0 MB/s 00:04:11.374 18:41:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:11.374 18:41:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:11.374 18:41:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:11.374 18:41:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:11.374 18:41:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:11.374 18:41:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:11.374 18:41:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:11.374 18:41:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:11.632 /dev/nbd1 00:04:11.632 18:41:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:11.632 18:41:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:11.632 18:41:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:11.632 18:41:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:11.632 18:41:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:11.632 18:41:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:11.632 18:41:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:11.632 18:41:33 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:11.632 18:41:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:11.632 18:41:33 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:11.632 18:41:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:11.632 1+0 records in 00:04:11.632 1+0 records out 00:04:11.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197893 s, 20.7 MB/s 00:04:11.632 18:41:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:11.632 18:41:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:11.633 18:41:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:11.633 18:41:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:11.633 18:41:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:11.633 18:41:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:11.633 18:41:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:11.633 18:41:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:11.633 18:41:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:11.633 18:41:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:11.892 { 00:04:11.892 "nbd_device": "/dev/nbd0", 00:04:11.892 "bdev_name": "Malloc0" 00:04:11.892 }, 00:04:11.892 { 00:04:11.892 "nbd_device": "/dev/nbd1", 00:04:11.892 "bdev_name": "Malloc1" 00:04:11.892 } 00:04:11.892 ]' 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:11.892 { 00:04:11.892 "nbd_device": "/dev/nbd0", 00:04:11.892 "bdev_name": "Malloc0" 00:04:11.892 
}, 00:04:11.892 { 00:04:11.892 "nbd_device": "/dev/nbd1", 00:04:11.892 "bdev_name": "Malloc1" 00:04:11.892 } 00:04:11.892 ]' 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:11.892 /dev/nbd1' 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:11.892 /dev/nbd1' 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:11.892 256+0 records in 00:04:11.892 256+0 records out 00:04:11.892 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101213 s, 104 MB/s 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:11.892 256+0 records in 00:04:11.892 256+0 records out 00:04:11.892 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133663 s, 78.4 MB/s 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:11.892 256+0 records in 00:04:11.892 256+0 records out 00:04:11.892 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147675 s, 71.0 MB/s 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:11.892 18:41:34 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:11.892 18:41:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:12.151 18:41:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:12.151 18:41:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:12.151 18:41:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:12.151 18:41:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:12.151 18:41:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:12.151 18:41:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:12.151 18:41:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:12.151 18:41:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:12.151 18:41:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:12.151 18:41:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:12.410 18:41:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:12.410 18:41:34 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:12.410 18:41:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:12.410 18:41:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:12.410 18:41:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:12.410 18:41:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:12.410 18:41:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:12.410 18:41:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:12.410 18:41:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:12.410 18:41:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:12.410 18:41:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:12.669 18:41:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:12.669 18:41:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:12.669 18:41:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:12.669 18:41:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:12.669 18:41:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:12.669 18:41:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:12.669 18:41:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:12.669 18:41:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:12.669 18:41:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:12.669 18:41:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:12.669 18:41:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:12.669 18:41:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:12.669 18:41:34 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:12.929 18:41:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:12.929 [2024-11-20 18:41:35.169965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:12.929 [2024-11-20 18:41:35.206927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:12.929 [2024-11-20 18:41:35.206927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.929 [2024-11-20 18:41:35.247860] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:12.929 [2024-11-20 18:41:35.247903] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:16.210 18:41:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:16.210 18:41:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:16.210 spdk_app_start Round 1 00:04:16.210 18:41:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3455611 /var/tmp/spdk-nbd.sock 00:04:16.210 18:41:38 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3455611 ']' 00:04:16.210 18:41:38 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:16.210 18:41:38 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:16.210 18:41:38 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:16.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:16.210 18:41:38 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:16.210 18:41:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:16.210 18:41:38 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:16.210 18:41:38 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:16.210 18:41:38 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:16.210 Malloc0 00:04:16.210 18:41:38 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:16.469 Malloc1 00:04:16.469 18:41:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:16.469 18:41:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:16.469 18:41:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:16.469 18:41:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:16.469 18:41:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:16.469 18:41:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:16.469 18:41:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:16.469 18:41:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:16.469 18:41:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:16.469 18:41:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:16.469 18:41:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:16.469 18:41:38 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:16.469 18:41:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:16.469 18:41:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:16.469 18:41:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:16.469 18:41:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:16.728 /dev/nbd0 00:04:16.728 18:41:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:16.728 18:41:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:16.728 18:41:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:16.728 18:41:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:16.728 18:41:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:16.728 18:41:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:16.728 18:41:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:16.728 18:41:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:16.728 18:41:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:16.728 18:41:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:16.728 18:41:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:16.728 1+0 records in 00:04:16.728 1+0 records out 00:04:16.728 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211309 s, 19.4 MB/s 00:04:16.728 18:41:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:16.728 18:41:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:16.728 18:41:38 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:16.728 18:41:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:16.728 18:41:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:16.728 18:41:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:16.728 18:41:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:16.728 18:41:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:16.987 /dev/nbd1 00:04:16.987 18:41:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:16.987 18:41:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:16.987 18:41:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:16.987 18:41:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:16.987 18:41:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:16.987 18:41:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:16.987 18:41:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:16.987 18:41:39 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:16.987 18:41:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:16.987 18:41:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:16.987 18:41:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:16.987 1+0 records in 00:04:16.987 1+0 records out 00:04:16.987 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185844 s, 22.0 MB/s 00:04:16.987 18:41:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:16.987 18:41:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:16.987 18:41:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:16.987 18:41:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:16.987 18:41:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:16.987 18:41:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:16.987 18:41:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:16.987 18:41:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:16.987 18:41:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:16.987 18:41:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:17.246 18:41:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:17.246 { 00:04:17.246 "nbd_device": "/dev/nbd0", 00:04:17.246 "bdev_name": "Malloc0" 00:04:17.246 }, 00:04:17.246 { 00:04:17.246 "nbd_device": "/dev/nbd1", 00:04:17.246 "bdev_name": "Malloc1" 00:04:17.246 } 00:04:17.246 ]' 00:04:17.246 18:41:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:17.246 { 00:04:17.246 "nbd_device": "/dev/nbd0", 00:04:17.246 "bdev_name": "Malloc0" 00:04:17.246 }, 00:04:17.246 { 00:04:17.246 "nbd_device": "/dev/nbd1", 00:04:17.246 "bdev_name": "Malloc1" 00:04:17.246 } 00:04:17.246 ]' 00:04:17.246 18:41:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:17.246 18:41:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:17.246 /dev/nbd1' 00:04:17.246 18:41:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:17.246 /dev/nbd1' 00:04:17.246 
18:41:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:17.246 18:41:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:17.246 18:41:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:17.246 18:41:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:17.246 18:41:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:17.246 18:41:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:17.246 18:41:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:17.246 18:41:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:17.246 18:41:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:17.246 18:41:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:17.246 18:41:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:17.246 18:41:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:17.246 256+0 records in 00:04:17.246 256+0 records out 00:04:17.246 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106251 s, 98.7 MB/s 00:04:17.246 18:41:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:17.246 18:41:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:17.246 256+0 records in 00:04:17.246 256+0 records out 00:04:17.246 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132661 s, 79.0 MB/s 00:04:17.246 18:41:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:17.246 18:41:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:17.246 256+0 records in 00:04:17.246 256+0 records out 00:04:17.246 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147794 s, 70.9 MB/s 00:04:17.246 18:41:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:17.246 18:41:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:17.246 18:41:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:17.246 18:41:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:17.246 18:41:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:17.246 18:41:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:17.246 18:41:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:17.246 18:41:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:17.246 18:41:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:17.246 18:41:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:17.246 18:41:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:17.247 18:41:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:17.247 18:41:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:17.247 18:41:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:17.247 18:41:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:17.247 18:41:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:17.247 18:41:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:17.247 18:41:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:17.247 18:41:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:17.505 18:41:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:17.505 18:41:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:17.505 18:41:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:17.505 18:41:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:17.505 18:41:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:17.505 18:41:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:17.505 18:41:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:17.505 18:41:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:17.505 18:41:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:17.505 18:41:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:17.764 18:41:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:17.764 18:41:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:17.764 18:41:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:17.764 18:41:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:17.764 18:41:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:17.764 18:41:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:17.764 18:41:39 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:17.764 18:41:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:17.764 18:41:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:17.764 18:41:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:17.764 18:41:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:17.764 18:41:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:17.764 18:41:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:17.764 18:41:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:18.023 18:41:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:18.023 18:41:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:18.023 18:41:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:18.023 18:41:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:18.023 18:41:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:18.023 18:41:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:18.023 18:41:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:18.023 18:41:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:18.023 18:41:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:18.023 18:41:40 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:18.023 18:41:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:18.281 [2024-11-20 18:41:40.474745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:18.281 [2024-11-20 18:41:40.516224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:18.281 [2024-11-20 18:41:40.516226] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.281 [2024-11-20 18:41:40.557986] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:18.281 [2024-11-20 18:41:40.558024] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:21.562 18:41:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:21.562 18:41:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:21.562 spdk_app_start Round 2 00:04:21.562 18:41:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3455611 /var/tmp/spdk-nbd.sock 00:04:21.562 18:41:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3455611 ']' 00:04:21.562 18:41:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:21.562 18:41:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:21.562 18:41:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:21.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:21.562 18:41:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:21.562 18:41:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:21.562 18:41:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:21.562 18:41:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:21.562 18:41:43 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:21.562 Malloc0 00:04:21.563 18:41:43 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:21.822 Malloc1 00:04:21.822 18:41:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:21.822 18:41:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:21.822 18:41:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:21.822 18:41:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:21.822 18:41:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:21.822 18:41:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:21.822 18:41:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:21.822 18:41:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:21.822 18:41:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:21.822 18:41:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:21.822 18:41:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:21.822 18:41:43 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:21.822 18:41:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:21.822 18:41:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:21.822 18:41:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:21.822 18:41:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:21.822 /dev/nbd0 00:04:21.822 18:41:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:21.822 18:41:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:21.822 18:41:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:21.822 18:41:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:21.822 18:41:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:21.822 18:41:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:21.822 18:41:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:21.822 18:41:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:21.822 18:41:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:21.822 18:41:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:21.822 18:41:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:21.822 1+0 records in 00:04:21.822 1+0 records out 00:04:21.822 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200525 s, 20.4 MB/s 00:04:22.080 18:41:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:22.080 18:41:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:22.080 18:41:44 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:22.080 18:41:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:22.080 18:41:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:22.080 18:41:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:22.080 18:41:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:22.080 18:41:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:22.080 /dev/nbd1 00:04:22.080 18:41:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:22.080 18:41:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:22.080 18:41:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:22.080 18:41:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:22.080 18:41:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:22.080 18:41:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:22.080 18:41:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:22.080 18:41:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:22.080 18:41:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:22.080 18:41:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:22.080 18:41:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:22.081 1+0 records in 00:04:22.081 1+0 records out 00:04:22.081 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256671 s, 16.0 MB/s 00:04:22.081 18:41:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:22.081 18:41:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:22.081 18:41:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:22.081 18:41:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:22.081 18:41:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:22.081 18:41:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:22.081 18:41:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:22.081 18:41:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:22.081 18:41:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:22.081 18:41:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:22.339 18:41:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:22.339 { 00:04:22.339 "nbd_device": "/dev/nbd0", 00:04:22.339 "bdev_name": "Malloc0" 00:04:22.339 }, 00:04:22.339 { 00:04:22.339 "nbd_device": "/dev/nbd1", 00:04:22.339 "bdev_name": "Malloc1" 00:04:22.339 } 00:04:22.339 ]' 00:04:22.339 18:41:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:22.339 { 00:04:22.339 "nbd_device": "/dev/nbd0", 00:04:22.339 "bdev_name": "Malloc0" 00:04:22.339 }, 00:04:22.339 { 00:04:22.339 "nbd_device": "/dev/nbd1", 00:04:22.339 "bdev_name": "Malloc1" 00:04:22.339 } 00:04:22.339 ]' 00:04:22.339 18:41:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:22.339 18:41:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:22.339 /dev/nbd1' 00:04:22.339 18:41:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:22.339 /dev/nbd1' 00:04:22.339 
18:41:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:22.339 18:41:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:22.339 18:41:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:22.339 18:41:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:22.339 18:41:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:22.339 18:41:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:22.597 18:41:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:22.597 18:41:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:22.597 18:41:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:22.597 18:41:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:22.597 18:41:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:22.597 18:41:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:22.597 256+0 records in 00:04:22.597 256+0 records out 00:04:22.597 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105014 s, 99.9 MB/s 00:04:22.597 18:41:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:22.597 18:41:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:22.597 256+0 records in 00:04:22.597 256+0 records out 00:04:22.597 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138557 s, 75.7 MB/s 00:04:22.597 18:41:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:22.597 18:41:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:04:22.597 256+0 records in
00:04:22.597 256+0 records out
00:04:22.597 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144849 s, 72.4 MB/s
00:04:22.597 18:41:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:04:22.597 18:41:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:22.597 18:41:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:22.597 18:41:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:04:22.597 18:41:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:22.597 18:41:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:04:22.597 18:41:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:04:22.597 18:41:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:22.597 18:41:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:04:22.597 18:41:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:22.597 18:41:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:04:22.597 18:41:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:22.597 18:41:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:04:22.597 18:41:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:22.597 18:41:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:22.597 18:41:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:04:22.597 18:41:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:04:22.597 18:41:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:22.597 18:41:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:04:22.856 18:41:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:04:22.856 18:41:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:04:22.856 18:41:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:04:22.856 18:41:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:22.856 18:41:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:22.856 18:41:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:04:22.856 18:41:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:22.856 18:41:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:22.856 18:41:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:22.856 18:41:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:04:22.856 18:41:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:04:22.856 18:41:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:04:22.856 18:41:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:04:22.856 18:41:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:22.856 18:41:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:22.856 18:41:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:04:22.856 18:41:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:22.856 18:41:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:22.856 18:41:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:22.856 18:41:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:22.856 18:41:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:23.114 18:41:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:04:23.114 18:41:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:04:23.114 18:41:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:23.114 18:41:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:04:23.114 18:41:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:04:23.114 18:41:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:23.114 18:41:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:04:23.114 18:41:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:04:23.114 18:41:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:04:23.114 18:41:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:04:23.114 18:41:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:04:23.114 18:41:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:04:23.114 18:41:45 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:04:23.371 18:41:45 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:04:23.630 [2024-11-20 18:41:45.754826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:23.630 [2024-11-20 18:41:45.792194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:23.630 [2024-11-20 18:41:45.792195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:23.630 [2024-11-20 18:41:45.832928] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:04:23.630 [2024-11-20 18:41:45.832965] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:04:26.914 18:41:48 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3455611 /var/tmp/spdk-nbd.sock
00:04:26.915 18:41:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3455611 ']'
00:04:26.915 18:41:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:26.915 18:41:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:26.915 18:41:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:04:26.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
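The nbd_dd_data_verify steps traced above reduce to a dd write of a known pattern followed by a byte-wise cmp against each device. A minimal sketch of that write/verify pattern, using ordinary temp files in place of /dev/nbdX (the temp-file stand-ins are an assumption for illustration; the real test targets NBD block devices):

```shell
# Write a known random pattern to a source file, copy it to a stand-in
# "device", then verify byte-for-byte with cmp, mirroring the verify
# branch of bdev/nbd_common.sh's nbd_dd_data_verify.
tmp_file=$(mktemp)   # plays the role of .../test/event/nbdrandtest
fake_dev=$(mktemp)   # plays the role of /dev/nbd0 (illustrative, not a real device)
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
dd if="$tmp_file" of="$fake_dev" bs=4096 count=256 2>/dev/null
# cmp -b prints differing bytes, -n 1M limits the comparison window
if cmp -b -n 1M "$tmp_file" "$fake_dev"; then verify_result=ok; else verify_result=corrupt; fi
rm -f "$tmp_file" "$fake_dev"
```

The real helper repeats the cmp once per device in nbd_list and only deletes the pattern file after every device has been checked.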
00:04:26.915 18:41:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:26.915 18:41:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:26.915 18:41:48 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:26.915 18:41:48 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:04:26.915 18:41:48 event.app_repeat -- event/event.sh@39 -- # killprocess 3455611
00:04:26.915 18:41:48 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 3455611 ']'
00:04:26.915 18:41:48 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 3455611
00:04:26.915 18:41:48 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:04:26.915 18:41:48 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:26.915 18:41:48 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3455611
00:04:26.915 18:41:48 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:26.915 18:41:48 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:26.915 18:41:48 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3455611'
00:04:26.915 killing process with pid 3455611
00:04:26.915 18:41:48 event.app_repeat -- common/autotest_common.sh@973 -- # kill 3455611
00:04:26.915 18:41:48 event.app_repeat -- common/autotest_common.sh@978 -- # wait 3455611
00:04:26.915 spdk_app_start is called in Round 0.
00:04:26.915 Shutdown signal received, stop current app iteration
00:04:26.915 Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 reinitialization...
00:04:26.915 spdk_app_start is called in Round 1.
00:04:26.915 Shutdown signal received, stop current app iteration
00:04:26.915 Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 reinitialization...
00:04:26.915 spdk_app_start is called in Round 2.
00:04:26.915 Shutdown signal received, stop current app iteration
00:04:26.915 Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 reinitialization...
00:04:26.915 spdk_app_start is called in Round 3.
00:04:26.915 Shutdown signal received, stop current app iteration
00:04:26.915 18:41:49 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:04:26.915 18:41:49 event.app_repeat -- event/event.sh@42 -- # return 0
00:04:26.915
00:04:26.915 real 0m16.355s
00:04:26.915 user 0m35.911s
00:04:26.915 sys 0m2.490s
00:04:26.915 18:41:49 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:26.915 18:41:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:26.915 ************************************
00:04:26.915 END TEST app_repeat
00:04:26.915 ************************************
00:04:26.915 18:41:49 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:04:26.915 18:41:49 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:04:26.915 18:41:49 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:26.915 18:41:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:26.915 18:41:49 event -- common/autotest_common.sh@10 -- # set +x
00:04:26.915 ************************************
00:04:26.915 START TEST cpu_locks
00:04:26.915 ************************************
00:04:26.915 18:41:49 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:04:26.915 * Looking for test storage...
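The waitfornbd_exit helper traced in the app_repeat teardown above polls /proc/partitions up to 20 times until the nbd device name disappears. The same bounded-retry shape, sketched against an ordinary file so it runs anywhere (the marker file and sleep interval are illustrative, not from the test):

```shell
# Bounded polling loop: wait for a resource to go away, give up after 20 tries.
# In bdev/nbd_common.sh the per-iteration check is: grep -q -w nbdX /proc/partitions
marker=$(mktemp)          # stands in for the nbdX entry in /proc/partitions
rm -f "$marker" &         # "device teardown" happening asynchronously
i=1
while [ "$i" -le 20 ]; do
    [ -e "$marker" ] || break   # resource gone: stop polling
    sleep 0.1
    i=$((i + 1))
done
wait                       # reap the background rm
[ -e "$marker" ] && wait_result=timeout || wait_result=gone
```

The cap of 20 iterations matches the `(( i <= 20 ))` guard in the trace; without it a stuck teardown would hang the whole test run.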
00:04:26.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:04:26.915 18:41:49 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:26.915 18:41:49 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version
00:04:26.915 18:41:49 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:27.174 18:41:49 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:27.174 18:41:49 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:27.174 18:41:49 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:27.174 18:41:49 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:27.174 18:41:49 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:04:27.174 18:41:49 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:04:27.174 18:41:49 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:04:27.174 18:41:49 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:04:27.174 18:41:49 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:04:27.174 18:41:49 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:04:27.174 18:41:49 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:04:27.174 18:41:49 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:27.174 18:41:49 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:04:27.174 18:41:49 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:04:27.174 18:41:49 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:27.174 18:41:49 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:27.174 18:41:49 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:04:27.174 18:41:49 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:04:27.174 18:41:49 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:27.174 18:41:49 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:04:27.174 18:41:49 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:04:27.174 18:41:49 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:04:27.174 18:41:49 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:04:27.174 18:41:49 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:27.174 18:41:49 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:04:27.174 18:41:49 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:04:27.174 18:41:49 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:27.174 18:41:49 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:27.174 18:41:49 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:04:27.175 18:41:49 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:27.175 18:41:49 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:27.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:27.175 --rc genhtml_branch_coverage=1
00:04:27.175 --rc genhtml_function_coverage=1
00:04:27.175 --rc genhtml_legend=1
00:04:27.175 --rc geninfo_all_blocks=1
00:04:27.175 --rc geninfo_unexecuted_blocks=1
00:04:27.175
00:04:27.175 '
00:04:27.175 18:41:49 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:27.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:27.175 --rc genhtml_branch_coverage=1
00:04:27.175 --rc genhtml_function_coverage=1
00:04:27.175 --rc genhtml_legend=1
00:04:27.175 --rc geninfo_all_blocks=1
00:04:27.175 --rc geninfo_unexecuted_blocks=1
00:04:27.175
00:04:27.175 '
00:04:27.175 18:41:49 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:04:27.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:27.175 --rc genhtml_branch_coverage=1
00:04:27.175 --rc genhtml_function_coverage=1
00:04:27.175 --rc genhtml_legend=1
00:04:27.175 --rc geninfo_all_blocks=1
00:04:27.175 --rc geninfo_unexecuted_blocks=1
00:04:27.175
00:04:27.175 '
00:04:27.175 18:41:49 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:04:27.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:27.175 --rc genhtml_branch_coverage=1
00:04:27.175 --rc genhtml_function_coverage=1
00:04:27.175 --rc genhtml_legend=1
00:04:27.175 --rc geninfo_all_blocks=1
00:04:27.175 --rc geninfo_unexecuted_blocks=1
00:04:27.175
00:04:27.175 '
00:04:27.175 18:41:49 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:04:27.175 18:41:49 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:04:27.175 18:41:49 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:04:27.175 18:41:49 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:04:27.175 18:41:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:27.175 18:41:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:27.175 18:41:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:27.175 ************************************
00:04:27.175 START TEST default_locks
00:04:27.175 ************************************
00:04:27.175 18:41:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
00:04:27.175 18:41:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3458609
00:04:27.175 18:41:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3458609
00:04:27.175 18:41:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:27.175 18:41:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3458609 ']'
00:04:27.175 18:41:49 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:27.175 18:41:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:27.175 18:41:49 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:27.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:27.175 18:41:49 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:27.175 18:41:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:04:27.175 [2024-11-20 18:41:49.342699] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization...
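The lt/cmp_versions trace from scripts/common.sh earlier in this run splits each version string on separators and compares component-wise, padding the shorter version with zeros. A condensed sketch of that comparison (the `version_lt` function name is illustrative; the real script tracks lt/gt/eq counters and supports several operators):

```shell
# Component-wise version comparison: split on '.', compare numerically
# field by field, treating missing fields as 0 (so 1.15 < 2, and 2.0 == 2).
version_lt() {    # returns 0 (true) iff $1 < $2
    local IFS=.
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1          # all components equal: not strictly less
}
```

With the arguments from the trace, `version_lt 1.15 2` succeeds because the first components already decide it (1 < 2); the lcov 1.x-vs-2 check above uses exactly this outcome to pick coverage flags.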
00:04:27.175 [2024-11-20 18:41:49.342740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3458609 ]
00:04:27.175 [2024-11-20 18:41:49.417472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:27.175 [2024-11-20 18:41:49.456577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:27.434 18:41:49 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:27.434 18:41:49 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
00:04:27.434 18:41:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3458609
00:04:27.434 18:41:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3458609
00:04:27.434 18:41:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:28.002 lslocks: write error
00:04:28.002 18:41:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3458609
00:04:28.002 18:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 3458609 ']'
00:04:28.002 18:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 3458609
00:04:28.002 18:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
00:04:28.002 18:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:28.002 18:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3458609
00:04:28.002 18:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:28.002 18:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:28.002 18:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3458609'
00:04:28.002 killing process with pid 3458609
00:04:28.002 18:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 3458609
00:04:28.002 18:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 3458609
00:04:28.261 18:41:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3458609
00:04:28.262 18:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:04:28.262 18:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3458609
00:04:28.262 18:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:04:28.262 18:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:28.262 18:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:04:28.262 18:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:28.262 18:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 3458609
00:04:28.262 18:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3458609 ']'
00:04:28.262 18:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:28.262 18:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:28.262 18:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:28.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
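The killprocess helper traced above never signals blindly: it first probes liveness with `kill -0`, then resolves the process name and refuses to kill a sudo wrapper, and only then sends the signal. A compact sketch of the same sequence of checks (the `sleep 30` target is illustrative):

```shell
# Liveness + identity check before killing, mirroring the kill -0 / ps / kill
# sequence from autotest_common.sh's killprocess.
sleep 30 &
pid=$!
if kill -0 "$pid" 2>/dev/null; then            # process exists and is signalable
    name=$(ps --no-headers -o comm= "$pid")    # resolve the command name by pid
    [ "$name" = sudo ] || kill "$pid"          # never blindly kill a sudo wrapper
    kill_result=killed
else
    kill_result=missing
fi
wait "$pid" 2>/dev/null || true                # reap; nonzero status is expected
```

`kill -0` sends no signal at all; it only reports whether the pid is valid and signalable, which is why the helper can safely use it as an existence probe before the real `kill`.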
00:04:28.262 18:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:28.262 18:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:04:28.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3458609) - No such process
00:04:28.262 ERROR: process (pid: 3458609) is no longer running
00:04:28.262 18:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:28.262 18:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:04:28.262 18:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:04:28.262 18:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:28.262 18:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:04:28.262 18:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:04:28.262 18:41:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:04:28.262 18:41:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:04:28.262 18:41:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:04:28.262 18:41:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:04:28.262
00:04:28.262 real 0m1.245s
00:04:28.262 user 0m1.204s
00:04:28.262 sys 0m0.551s
00:04:28.262 18:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:28.262 18:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:04:28.262 ************************************
00:04:28.262 END TEST default_locks
00:04:28.262 ************************************
00:04:28.262 18:41:50 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:04:28.262 18:41:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:28.262 18:41:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:28.262 18:41:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:28.521 ************************************
00:04:28.521 START TEST default_locks_via_rpc
00:04:28.521 ************************************
00:04:28.521 18:41:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:04:28.521 18:41:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3458865
00:04:28.521 18:41:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3458865
00:04:28.521 18:41:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:28.521 18:41:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3458865 ']'
00:04:28.521 18:41:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:28.521 18:41:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:28.521 18:41:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:28.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:28.521 18:41:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:28.521 18:41:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:28.521 [2024-11-20 18:41:50.658673] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization...
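The locks_exist check that these cpu_locks tests run (`lslocks -p <pid> | grep -q spdk_cpu_lock`) confirms the target holds its per-core lock file. The underlying primitive is a POSIX `flock` on a file descriptor; a minimal sketch with a temp file (the lock path here is an illustrative stand-in, not the real /var/tmp lock file):

```shell
# Take an exclusive file lock on fd 9 -- the same primitive behind the
# spdk_cpu_lock files that lslocks reports for the spdk_tgt pid.
lockfile=$(mktemp)               # illustrative stand-in for the per-core lock file
exec 9>"$lockfile"
if flock -n 9; then lock_state=held; else lock_state=busy; fi
# A second, independent open of the same file cannot acquire the lock
# while fd 9 holds it, which is what --disable-cpumask-locks avoids:
( exec 8>"$lockfile"; flock -n 8 ) && second=acquired || second=blocked
exec 9>&-                        # closing the fd releases the lock
rm -f "$lockfile"
```

This also explains the two-instance tests later in the run: a second spdk_tgt on the same core mask can only start when launched with --disable-cpumask-locks, because the first instance's flock is still held.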
00:04:28.521 [2024-11-20 18:41:50.658717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3458865 ]
00:04:28.521 [2024-11-20 18:41:50.734184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:28.521 [2024-11-20 18:41:50.770806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:28.781 18:41:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:28.781 18:41:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:04:28.781 18:41:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:04:28.781 18:41:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:28.781 18:41:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:28.781 18:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:28.781 18:41:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:04:28.781 18:41:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:04:28.781 18:41:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:04:28.781 18:41:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:04:28.781 18:41:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:04:28.781 18:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:28.781 18:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:28.781 18:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:28.781 18:41:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3458865
00:04:28.781 18:41:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3458865
00:04:28.781 18:41:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:29.042 18:41:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3458865
00:04:29.042 18:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 3458865 ']'
00:04:29.042 18:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 3458865
00:04:29.042 18:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:04:29.042 18:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:29.042 18:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3458865
00:04:29.042 18:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:29.042 18:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:29.042 18:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3458865'
00:04:29.042 killing process with pid 3458865
00:04:29.042 18:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 3458865
00:04:29.042 18:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 3458865
00:04:29.612
00:04:29.612 real 0m1.042s
00:04:29.612 user 0m0.993s
00:04:29.612 sys 0m0.476s
00:04:29.612 18:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:29.612 18:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:29.612 ************************************
00:04:29.612 END TEST default_locks_via_rpc
00:04:29.612 ************************************
00:04:29.612 18:41:51 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:04:29.612 18:41:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:29.612 18:41:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:29.612 18:41:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:29.612 ************************************
00:04:29.612 START TEST non_locking_app_on_locked_coremask
00:04:29.612 ************************************
00:04:29.612 18:41:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:04:29.612 18:41:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3459119
00:04:29.612 18:41:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3459119 /var/tmp/spdk.sock
00:04:29.612 18:41:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:29.612 18:41:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3459119 ']'
00:04:29.612 18:41:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:29.612 18:41:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:29.612 18:41:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:29.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:29.612 18:41:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:29.612 18:41:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:29.612 [2024-11-20 18:41:51.770653] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization...
00:04:29.612 [2024-11-20 18:41:51.770695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3459119 ]
00:04:29.612 [2024-11-20 18:41:51.845721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:29.612 [2024-11-20 18:41:51.887334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:29.872 18:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:29.872 18:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:04:29.872 18:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3459130
00:04:29.872 18:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:04:29.872 18:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3459130 /var/tmp/spdk2.sock
00:04:29.872 18:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3459130 ']'
00:04:29.872 18:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:29.872 18:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:29.872 18:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:04:29.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:29.872 18:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:29.872 18:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:30.132 [2024-11-20 18:41:52.150513] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization...
00:04:30.132 [2024-11-20 18:41:52.150560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3459130 ]
00:04:30.132 [2024-11-20 18:41:52.234560] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:04:30.132 [2024-11-20 18:41:52.234581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:30.132 [2024-11-20 18:41:52.315369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:30.700 18:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:30.700 18:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:04:30.700 18:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3459119
00:04:30.700 18:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3459119
00:04:30.700 18:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:31.286 lslocks: write error
00:04:31.286 18:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3459119
00:04:31.286 18:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3459119 ']'
00:04:31.286 18:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3459119
00:04:31.286 18:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:04:31.286 18:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:31.286 18:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3459119
00:04:31.546 18:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:31.546 18:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:31.546 18:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3459119'
00:04:31.546 killing process with pid 3459119
00:04:31.546 18:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3459119
00:04:31.546 18:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3459119
00:04:32.114 18:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3459130
00:04:32.114 18:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3459130 ']'
00:04:32.114 18:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3459130
00:04:32.114 18:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:04:32.114 18:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:32.114 18:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3459130
00:04:32.114 18:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:32.114 18:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:32.114 18:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3459130'
00:04:32.114 killing process with pid 3459130
00:04:32.114 18:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3459130
00:04:32.114 18:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3459130
00:04:32.374
00:04:32.374 real 0m2.843s
00:04:32.374 user 0m2.997s
00:04:32.374 sys 0m0.944s
00:04:32.374 18:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:32.374 18:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:32.374 ************************************
00:04:32.374 END TEST non_locking_app_on_locked_coremask
00:04:32.374 ************************************
00:04:32.374 18:41:54 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:04:32.374 18:41:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:32.374 18:41:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:32.374 18:41:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:32.374 ************************************
00:04:32.374 START TEST locking_app_on_unlocked_coremask
00:04:32.374 ************************************
00:04:32.374 18:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:04:32.374 18:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3459624
00:04:32.374 18:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3459624 /var/tmp/spdk.sock
00:04:32.374 18:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:04:32.374 18:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3459624 ']'
00:04:32.374 18:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:32.374 18:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:32.374 18:41:54
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.374 18:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:32.374 18:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:32.374 [2024-11-20 18:41:54.686671] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:04:32.374 [2024-11-20 18:41:54.686710] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3459624 ] 00:04:32.633 [2024-11-20 18:41:54.762037] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:32.633 [2024-11-20 18:41:54.762062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.633 [2024-11-20 18:41:54.803915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.893 18:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:32.893 18:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:32.893 18:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3459636 00:04:32.893 18:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3459636 /var/tmp/spdk2.sock 00:04:32.893 18:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:32.893 18:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3459636 ']' 00:04:32.893 18:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:32.893 18:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:32.893 18:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:32.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:32.893 18:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:32.893 18:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:32.893 [2024-11-20 18:41:55.072669] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:04:32.893 [2024-11-20 18:41:55.072710] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3459636 ] 00:04:32.893 [2024-11-20 18:41:55.163952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.152 [2024-11-20 18:41:55.253143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.720 18:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.720 18:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:33.720 18:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3459636 00:04:33.720 18:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3459636 00:04:33.720 18:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:34.288 lslocks: write error 00:04:34.288 18:41:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3459624 00:04:34.288 18:41:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3459624 ']' 00:04:34.288 18:41:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3459624 00:04:34.288 18:41:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:34.288 18:41:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:34.288 18:41:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3459624 00:04:34.288 18:41:56 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:34.288 18:41:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:34.288 18:41:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3459624' 00:04:34.288 killing process with pid 3459624 00:04:34.288 18:41:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3459624 00:04:34.288 18:41:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3459624 00:04:35.225 18:41:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3459636 00:04:35.225 18:41:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3459636 ']' 00:04:35.225 18:41:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3459636 00:04:35.225 18:41:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:35.225 18:41:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.225 18:41:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3459636 00:04:35.225 18:41:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.225 18:41:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:35.225 18:41:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3459636' 00:04:35.225 killing process with pid 3459636 00:04:35.225 18:41:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3459636 00:04:35.225 18:41:57 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3459636 00:04:35.225 00:04:35.225 real 0m2.904s 00:04:35.225 user 0m3.044s 00:04:35.225 sys 0m0.960s 00:04:35.225 18:41:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.225 18:41:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:35.225 ************************************ 00:04:35.225 END TEST locking_app_on_unlocked_coremask 00:04:35.225 ************************************ 00:04:35.484 18:41:57 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:35.484 18:41:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.484 18:41:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.484 18:41:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:35.484 ************************************ 00:04:35.484 START TEST locking_app_on_locked_coremask 00:04:35.484 ************************************ 00:04:35.484 18:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:04:35.484 18:41:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3460122 00:04:35.484 18:41:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3460122 /var/tmp/spdk.sock 00:04:35.484 18:41:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:35.484 18:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3460122 ']' 00:04:35.484 18:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:04:35.484 18:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.484 18:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.484 18:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.484 18:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:35.484 [2024-11-20 18:41:57.663704] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:04:35.484 [2024-11-20 18:41:57.663747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3460122 ] 00:04:35.484 [2024-11-20 18:41:57.736813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.484 [2024-11-20 18:41:57.776175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.743 18:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:35.744 18:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:35.744 18:41:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3460136 00:04:35.744 18:41:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3460136 /var/tmp/spdk2.sock 00:04:35.744 18:41:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:04:35.744 18:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:35.744 18:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3460136 /var/tmp/spdk2.sock 00:04:35.744 18:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:35.744 18:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:35.744 18:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:35.744 18:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:35.744 18:41:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3460136 /var/tmp/spdk2.sock 00:04:35.744 18:41:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3460136 ']' 00:04:35.744 18:41:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:35.744 18:41:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.744 18:41:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:35.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:35.744 18:41:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.744 18:41:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:35.744 [2024-11-20 18:41:58.048682] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:04:35.744 [2024-11-20 18:41:58.048723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3460136 ] 00:04:36.003 [2024-11-20 18:41:58.138895] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3460122 has claimed it. 00:04:36.003 [2024-11-20 18:41:58.138930] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3460136) - No such process 00:04:36.571 ERROR: process (pid: 3460136) is no longer running 00:04:36.571 18:41:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.571 18:41:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:36.571 18:41:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:36.571 18:41:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:36.571 18:41:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:36.571 18:41:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:36.571 18:41:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3460122 00:04:36.571 18:41:58 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3460122 00:04:36.571 18:41:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:37.138 lslocks: write error 00:04:37.138 18:41:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3460122 00:04:37.138 18:41:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3460122 ']' 00:04:37.138 18:41:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3460122 00:04:37.138 18:41:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:37.138 18:41:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:37.138 18:41:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3460122 00:04:37.138 18:41:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:37.138 18:41:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:37.138 18:41:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3460122' 00:04:37.138 killing process with pid 3460122 00:04:37.138 18:41:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3460122 00:04:37.138 18:41:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3460122 00:04:37.397 00:04:37.397 real 0m2.060s 00:04:37.397 user 0m2.193s 00:04:37.397 sys 0m0.690s 00:04:37.397 18:41:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.397 18:41:59 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:04:37.397 ************************************ 00:04:37.397 END TEST locking_app_on_locked_coremask 00:04:37.397 ************************************ 00:04:37.397 18:41:59 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:37.397 18:41:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.397 18:41:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.397 18:41:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:37.655 ************************************ 00:04:37.655 START TEST locking_overlapped_coremask 00:04:37.656 ************************************ 00:04:37.656 18:41:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:04:37.656 18:41:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3460565 00:04:37.656 18:41:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3460565 /var/tmp/spdk.sock 00:04:37.656 18:41:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:37.656 18:41:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3460565 ']' 00:04:37.656 18:41:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.656 18:41:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.656 18:41:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:37.656 18:41:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.656 18:41:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:37.656 [2024-11-20 18:41:59.793323] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:04:37.656 [2024-11-20 18:41:59.793367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3460565 ] 00:04:37.656 [2024-11-20 18:41:59.868617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:37.656 [2024-11-20 18:41:59.911010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.656 [2024-11-20 18:41:59.911117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.656 [2024-11-20 18:41:59.911117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:38.591 18:42:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.591 18:42:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:38.591 18:42:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:38.591 18:42:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3460672 00:04:38.591 18:42:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3460672 /var/tmp/spdk2.sock 00:04:38.591 18:42:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:38.591 18:42:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 3460672 /var/tmp/spdk2.sock 00:04:38.591 18:42:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:38.591 18:42:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.591 18:42:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:38.591 18:42:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.591 18:42:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3460672 /var/tmp/spdk2.sock 00:04:38.591 18:42:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3460672 ']' 00:04:38.591 18:42:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:38.591 18:42:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.591 18:42:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:38.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:38.591 18:42:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.591 18:42:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:38.591 [2024-11-20 18:42:00.669601] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:04:38.592 [2024-11-20 18:42:00.669653] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3460672 ] 00:04:38.592 [2024-11-20 18:42:00.763215] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3460565 has claimed it. 00:04:38.592 [2024-11-20 18:42:00.763260] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:39.159 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3460672) - No such process 00:04:39.159 ERROR: process (pid: 3460672) is no longer running 00:04:39.159 18:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.159 18:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:39.159 18:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:39.159 18:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:39.159 18:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:39.159 18:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:39.159 18:42:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:39.159 18:42:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:39.159 18:42:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:39.159 18:42:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:39.159 18:42:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3460565 00:04:39.159 18:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 3460565 ']' 00:04:39.159 18:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 3460565 00:04:39.159 18:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:04:39.159 18:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.159 18:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3460565 00:04:39.159 18:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.159 18:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.159 18:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3460565' 00:04:39.159 killing process with pid 3460565 00:04:39.159 18:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 3460565 00:04:39.159 18:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 3460565 00:04:39.418 00:04:39.418 real 0m1.948s 00:04:39.418 user 0m5.611s 00:04:39.418 sys 0m0.431s 00:04:39.418 18:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.418 18:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:39.418 
************************************ 00:04:39.418 END TEST locking_overlapped_coremask 00:04:39.418 ************************************ 00:04:39.418 18:42:01 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:39.418 18:42:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.418 18:42:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.418 18:42:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:39.676 ************************************ 00:04:39.676 START TEST locking_overlapped_coremask_via_rpc 00:04:39.676 ************************************ 00:04:39.676 18:42:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:04:39.676 18:42:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3461003 00:04:39.676 18:42:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3461003 /var/tmp/spdk.sock 00:04:39.676 18:42:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:39.676 18:42:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3461003 ']' 00:04:39.676 18:42:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.676 18:42:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.676 18:42:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:39.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.676 18:42:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.676 18:42:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.676 [2024-11-20 18:42:01.806109] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:04:39.676 [2024-11-20 18:42:01.806151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3461003 ] 00:04:39.676 [2024-11-20 18:42:01.881228] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:39.676 [2024-11-20 18:42:01.881257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:39.676 [2024-11-20 18:42:01.923646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.676 [2024-11-20 18:42:01.923761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.676 [2024-11-20 18:42:01.923762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:39.935 18:42:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.935 18:42:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:39.935 18:42:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:39.935 18:42:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3461026 00:04:39.935 18:42:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # 
waitforlisten 3461026 /var/tmp/spdk2.sock 00:04:39.935 18:42:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3461026 ']' 00:04:39.935 18:42:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:39.935 18:42:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.935 18:42:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:39.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:39.935 18:42:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.935 18:42:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.935 [2024-11-20 18:42:02.191000] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:04:39.935 [2024-11-20 18:42:02.191047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3461026 ] 00:04:40.193 [2024-11-20 18:42:02.284119] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:40.193 [2024-11-20 18:42:02.284151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:40.193 [2024-11-20 18:42:02.371293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:40.193 [2024-11-20 18:42:02.371406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:40.193 [2024-11-20 18:42:02.371407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:04:40.758 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.758 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:40.758 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:40.758 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.758 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.758 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.758 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:40.758 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:40.758 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:40.758 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:40.758 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.758 18:42:03 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:40.758 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.758 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:40.758 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.758 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.758 [2024-11-20 18:42:03.049281] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3461003 has claimed it. 00:04:40.758 request: 00:04:40.758 { 00:04:40.758 "method": "framework_enable_cpumask_locks", 00:04:40.758 "req_id": 1 00:04:40.758 } 00:04:40.758 Got JSON-RPC error response 00:04:40.758 response: 00:04:40.758 { 00:04:40.758 "code": -32603, 00:04:40.758 "message": "Failed to claim CPU core: 2" 00:04:40.758 } 00:04:40.758 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:40.758 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:40.758 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:40.758 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:40.758 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:40.758 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3461003 /var/tmp/spdk.sock 00:04:40.758 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 3461003 ']' 00:04:40.758 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.758 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.758 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.758 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.758 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.015 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.015 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:41.015 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3461026 /var/tmp/spdk2.sock 00:04:41.015 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3461026 ']' 00:04:41.015 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:41.015 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.015 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:41.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:41.015 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.015 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.273 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.273 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:41.273 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:41.273 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:41.273 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:41.273 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:41.273 00:04:41.273 real 0m1.705s 00:04:41.273 user 0m0.824s 00:04:41.273 sys 0m0.133s 00:04:41.273 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.273 18:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.273 ************************************ 00:04:41.273 END TEST locking_overlapped_coremask_via_rpc 00:04:41.273 ************************************ 00:04:41.273 18:42:03 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:41.273 18:42:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3461003 ]] 00:04:41.273 18:42:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 3461003 00:04:41.273 18:42:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3461003 ']' 00:04:41.273 18:42:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3461003 00:04:41.273 18:42:03 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:41.273 18:42:03 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:41.273 18:42:03 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3461003 00:04:41.273 18:42:03 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:41.273 18:42:03 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:41.273 18:42:03 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3461003' 00:04:41.273 killing process with pid 3461003 00:04:41.273 18:42:03 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3461003 00:04:41.273 18:42:03 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3461003 00:04:41.839 18:42:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3461026 ]] 00:04:41.839 18:42:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3461026 00:04:41.839 18:42:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3461026 ']' 00:04:41.839 18:42:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3461026 00:04:41.839 18:42:03 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:41.839 18:42:03 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:41.839 18:42:03 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3461026 00:04:41.839 18:42:03 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:41.839 18:42:03 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:41.839 18:42:03 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3461026' 00:04:41.839 killing process with pid 3461026 00:04:41.839 18:42:03 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3461026 00:04:41.839 18:42:03 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3461026 00:04:42.099 18:42:04 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:42.099 18:42:04 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:42.099 18:42:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3461003 ]] 00:04:42.099 18:42:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3461003 00:04:42.099 18:42:04 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3461003 ']' 00:04:42.099 18:42:04 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3461003 00:04:42.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3461003) - No such process 00:04:42.099 18:42:04 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3461003 is not found' 00:04:42.099 Process with pid 3461003 is not found 00:04:42.099 18:42:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3461026 ]] 00:04:42.099 18:42:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3461026 00:04:42.099 18:42:04 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3461026 ']' 00:04:42.099 18:42:04 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3461026 00:04:42.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3461026) - No such process 00:04:42.099 18:42:04 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3461026 is not found' 00:04:42.099 Process with pid 3461026 is not found 00:04:42.099 18:42:04 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:42.099 00:04:42.099 real 0m15.143s 00:04:42.099 user 0m26.619s 00:04:42.099 sys 0m5.143s 00:04:42.099 18:42:04 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.099 
18:42:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:42.099 ************************************ 00:04:42.099 END TEST cpu_locks 00:04:42.099 ************************************ 00:04:42.099 00:04:42.099 real 0m39.755s 00:04:42.099 user 1m15.665s 00:04:42.099 sys 0m8.594s 00:04:42.099 18:42:04 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.099 18:42:04 event -- common/autotest_common.sh@10 -- # set +x 00:04:42.099 ************************************ 00:04:42.099 END TEST event 00:04:42.099 ************************************ 00:04:42.099 18:42:04 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:42.099 18:42:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.099 18:42:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.099 18:42:04 -- common/autotest_common.sh@10 -- # set +x 00:04:42.099 ************************************ 00:04:42.099 START TEST thread 00:04:42.099 ************************************ 00:04:42.099 18:42:04 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:42.099 * Looking for test storage... 
00:04:42.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:42.359 18:42:04 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:42.359 18:42:04 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:04:42.359 18:42:04 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:42.359 18:42:04 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:42.359 18:42:04 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.359 18:42:04 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.359 18:42:04 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.359 18:42:04 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.359 18:42:04 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.359 18:42:04 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.359 18:42:04 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.359 18:42:04 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.359 18:42:04 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.359 18:42:04 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.359 18:42:04 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.359 18:42:04 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:42.359 18:42:04 thread -- scripts/common.sh@345 -- # : 1 00:04:42.359 18:42:04 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.359 18:42:04 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.359 18:42:04 thread -- scripts/common.sh@365 -- # decimal 1 00:04:42.359 18:42:04 thread -- scripts/common.sh@353 -- # local d=1 00:04:42.359 18:42:04 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.359 18:42:04 thread -- scripts/common.sh@355 -- # echo 1 00:04:42.359 18:42:04 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.359 18:42:04 thread -- scripts/common.sh@366 -- # decimal 2 00:04:42.359 18:42:04 thread -- scripts/common.sh@353 -- # local d=2 00:04:42.359 18:42:04 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.359 18:42:04 thread -- scripts/common.sh@355 -- # echo 2 00:04:42.359 18:42:04 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.359 18:42:04 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.359 18:42:04 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.359 18:42:04 thread -- scripts/common.sh@368 -- # return 0 00:04:42.359 18:42:04 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.359 18:42:04 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:42.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.359 --rc genhtml_branch_coverage=1 00:04:42.359 --rc genhtml_function_coverage=1 00:04:42.359 --rc genhtml_legend=1 00:04:42.359 --rc geninfo_all_blocks=1 00:04:42.359 --rc geninfo_unexecuted_blocks=1 00:04:42.359 00:04:42.359 ' 00:04:42.359 18:42:04 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:42.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.359 --rc genhtml_branch_coverage=1 00:04:42.359 --rc genhtml_function_coverage=1 00:04:42.359 --rc genhtml_legend=1 00:04:42.359 --rc geninfo_all_blocks=1 00:04:42.359 --rc geninfo_unexecuted_blocks=1 00:04:42.359 00:04:42.359 ' 00:04:42.359 18:42:04 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:42.359 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.359 --rc genhtml_branch_coverage=1 00:04:42.359 --rc genhtml_function_coverage=1 00:04:42.359 --rc genhtml_legend=1 00:04:42.359 --rc geninfo_all_blocks=1 00:04:42.359 --rc geninfo_unexecuted_blocks=1 00:04:42.360 00:04:42.360 ' 00:04:42.360 18:42:04 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:42.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.360 --rc genhtml_branch_coverage=1 00:04:42.360 --rc genhtml_function_coverage=1 00:04:42.360 --rc genhtml_legend=1 00:04:42.360 --rc geninfo_all_blocks=1 00:04:42.360 --rc geninfo_unexecuted_blocks=1 00:04:42.360 00:04:42.360 ' 00:04:42.360 18:42:04 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:42.360 18:42:04 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:42.360 18:42:04 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.360 18:42:04 thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.360 ************************************ 00:04:42.360 START TEST thread_poller_perf 00:04:42.360 ************************************ 00:04:42.360 18:42:04 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:42.360 [2024-11-20 18:42:04.562934] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:04:42.360 [2024-11-20 18:42:04.563001] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3461585 ] 00:04:42.360 [2024-11-20 18:42:04.639804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.360 [2024-11-20 18:42:04.679133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.360 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:04:43.737 [2024-11-20T17:42:06.062Z] ====================================== 00:04:43.737 [2024-11-20T17:42:06.062Z] busy:2105634042 (cyc) 00:04:43.737 [2024-11-20T17:42:06.062Z] total_run_count: 399000 00:04:43.737 [2024-11-20T17:42:06.062Z] tsc_hz: 2100000000 (cyc) 00:04:43.737 [2024-11-20T17:42:06.062Z] ====================================== 00:04:43.737 [2024-11-20T17:42:06.062Z] poller_cost: 5277 (cyc), 2512 (nsec) 00:04:43.737 00:04:43.737 real 0m1.184s 00:04:43.737 user 0m1.105s 00:04:43.737 sys 0m0.074s 00:04:43.737 18:42:05 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.737 18:42:05 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:43.737 ************************************ 00:04:43.737 END TEST thread_poller_perf 00:04:43.737 ************************************ 00:04:43.737 18:42:05 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:43.737 18:42:05 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:43.737 18:42:05 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.737 18:42:05 thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.737 ************************************ 00:04:43.737 START TEST thread_poller_perf 00:04:43.737 
************************************ 00:04:43.737 18:42:05 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:43.737 [2024-11-20 18:42:05.818334] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:04:43.737 [2024-11-20 18:42:05.818403] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3462069 ] 00:04:43.737 [2024-11-20 18:42:05.896829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.737 [2024-11-20 18:42:05.937790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.737 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:04:44.673 [2024-11-20T17:42:06.998Z] ====================================== 00:04:44.673 [2024-11-20T17:42:06.998Z] busy:2101545348 (cyc) 00:04:44.673 [2024-11-20T17:42:06.998Z] total_run_count: 5283000 00:04:44.673 [2024-11-20T17:42:06.998Z] tsc_hz: 2100000000 (cyc) 00:04:44.673 [2024-11-20T17:42:06.998Z] ====================================== 00:04:44.673 [2024-11-20T17:42:06.998Z] poller_cost: 397 (cyc), 189 (nsec) 00:04:44.673 00:04:44.673 real 0m1.184s 00:04:44.673 user 0m1.101s 00:04:44.673 sys 0m0.077s 00:04:44.674 18:42:06 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.674 18:42:06 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:44.674 ************************************ 00:04:44.674 END TEST thread_poller_perf 00:04:44.674 ************************************ 00:04:44.934 18:42:07 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:44.934 00:04:44.934 real 0m2.679s 00:04:44.934 user 0m2.362s 00:04:44.934 sys 0m0.331s 00:04:44.934 18:42:07 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.934 18:42:07 thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.934 ************************************ 00:04:44.934 END TEST thread 00:04:44.934 ************************************ 00:04:44.934 18:42:07 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:04:44.934 18:42:07 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:44.934 18:42:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.934 18:42:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.934 18:42:07 -- common/autotest_common.sh@10 -- # set +x 00:04:44.934 ************************************ 00:04:44.934 START TEST app_cmdline 00:04:44.934 ************************************ 00:04:44.934 18:42:07 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:44.934 * Looking for test storage... 00:04:44.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:44.934 18:42:07 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:44.934 18:42:07 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:04:44.934 18:42:07 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:44.934 18:42:07 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:44.934 18:42:07 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.934 18:42:07 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.934 18:42:07 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.934 18:42:07 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.934 18:42:07 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.934 18:42:07 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.934 18:42:07 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:04:44.934 18:42:07 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.934 18:42:07 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.934 18:42:07 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.934 18:42:07 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.934 18:42:07 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:04:44.934 18:42:07 app_cmdline -- scripts/common.sh@345 -- # : 1 00:04:44.934 18:42:07 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.934 18:42:07 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:44.934 18:42:07 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:04:44.934 18:42:07 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:04:44.934 18:42:07 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.934 18:42:07 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:04:44.934 18:42:07 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.934 18:42:07 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:04:45.193 18:42:07 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:04:45.193 18:42:07 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.193 18:42:07 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:04:45.193 18:42:07 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.193 18:42:07 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.193 18:42:07 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.193 18:42:07 app_cmdline -- scripts/common.sh@368 -- # return 0 00:04:45.193 18:42:07 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.193 18:42:07 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:45.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.193 --rc genhtml_branch_coverage=1 
00:04:45.193 --rc genhtml_function_coverage=1 00:04:45.193 --rc genhtml_legend=1 00:04:45.193 --rc geninfo_all_blocks=1 00:04:45.193 --rc geninfo_unexecuted_blocks=1 00:04:45.193 00:04:45.193 ' 00:04:45.193 18:42:07 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:45.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.193 --rc genhtml_branch_coverage=1 00:04:45.193 --rc genhtml_function_coverage=1 00:04:45.193 --rc genhtml_legend=1 00:04:45.193 --rc geninfo_all_blocks=1 00:04:45.193 --rc geninfo_unexecuted_blocks=1 00:04:45.193 00:04:45.193 ' 00:04:45.194 18:42:07 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:45.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.194 --rc genhtml_branch_coverage=1 00:04:45.194 --rc genhtml_function_coverage=1 00:04:45.194 --rc genhtml_legend=1 00:04:45.194 --rc geninfo_all_blocks=1 00:04:45.194 --rc geninfo_unexecuted_blocks=1 00:04:45.194 00:04:45.194 ' 00:04:45.194 18:42:07 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:45.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.194 --rc genhtml_branch_coverage=1 00:04:45.194 --rc genhtml_function_coverage=1 00:04:45.194 --rc genhtml_legend=1 00:04:45.194 --rc geninfo_all_blocks=1 00:04:45.194 --rc geninfo_unexecuted_blocks=1 00:04:45.194 00:04:45.194 ' 00:04:45.194 18:42:07 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:45.194 18:42:07 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3462519 00:04:45.194 18:42:07 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3462519 00:04:45.194 18:42:07 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:45.194 18:42:07 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 3462519 ']' 00:04:45.194 18:42:07 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:04:45.194 18:42:07 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.194 18:42:07 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.194 18:42:07 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.194 18:42:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:45.194 [2024-11-20 18:42:07.314790] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:04:45.194 [2024-11-20 18:42:07.314859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3462519 ] 00:04:45.194 [2024-11-20 18:42:07.391525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.194 [2024-11-20 18:42:07.431220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.453 18:42:07 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.453 18:42:07 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:04:45.453 18:42:07 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:04:45.712 { 00:04:45.712 "version": "SPDK v25.01-pre git sha1 bd9804982", 00:04:45.712 "fields": { 00:04:45.712 "major": 25, 00:04:45.712 "minor": 1, 00:04:45.712 "patch": 0, 00:04:45.712 "suffix": "-pre", 00:04:45.712 "commit": "bd9804982" 00:04:45.712 } 00:04:45.712 } 00:04:45.712 18:42:07 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:04:45.712 18:42:07 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:04:45.712 18:42:07 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:04:45.712 18:42:07 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:04:45.712 18:42:07 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:04:45.712 18:42:07 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:04:45.712 18:42:07 app_cmdline -- app/cmdline.sh@26 -- # sort 00:04:45.712 18:42:07 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.712 18:42:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:45.712 18:42:07 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.712 18:42:07 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:04:45.712 18:42:07 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:04:45.712 18:42:07 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:45.712 18:42:07 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:04:45.712 18:42:07 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:45.712 18:42:07 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:45.712 18:42:07 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:45.712 18:42:07 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:45.712 18:42:07 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:45.712 18:42:07 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:45.712 18:42:07 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:04:45.712 18:42:07 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:45.712 18:42:07 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:04:45.712 18:42:07 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:45.971 request: 00:04:45.971 { 00:04:45.971 "method": "env_dpdk_get_mem_stats", 00:04:45.971 "req_id": 1 00:04:45.971 } 00:04:45.971 Got JSON-RPC error response 00:04:45.971 response: 00:04:45.971 { 00:04:45.971 "code": -32601, 00:04:45.971 "message": "Method not found" 00:04:45.971 } 00:04:45.971 18:42:08 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:04:45.971 18:42:08 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:45.971 18:42:08 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:45.971 18:42:08 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:45.971 18:42:08 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3462519 00:04:45.971 18:42:08 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 3462519 ']' 00:04:45.971 18:42:08 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 3462519 00:04:45.971 18:42:08 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:04:45.971 18:42:08 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:45.971 18:42:08 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3462519 00:04:45.971 18:42:08 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:45.971 18:42:08 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:45.971 18:42:08 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3462519' 00:04:45.971 killing process with pid 3462519 00:04:45.971 
18:42:08 app_cmdline -- common/autotest_common.sh@973 -- # kill 3462519 00:04:45.971 18:42:08 app_cmdline -- common/autotest_common.sh@978 -- # wait 3462519 00:04:46.230 00:04:46.230 real 0m1.353s 00:04:46.230 user 0m1.587s 00:04:46.230 sys 0m0.443s 00:04:46.230 18:42:08 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.230 18:42:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:46.230 ************************************ 00:04:46.230 END TEST app_cmdline 00:04:46.230 ************************************ 00:04:46.230 18:42:08 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:46.230 18:42:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.230 18:42:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.230 18:42:08 -- common/autotest_common.sh@10 -- # set +x 00:04:46.230 ************************************ 00:04:46.230 START TEST version 00:04:46.230 ************************************ 00:04:46.230 18:42:08 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:46.491 * Looking for test storage... 
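The killprocess teardown traced at the end of the app_cmdline test follows a recognizable pattern: confirm the pid is still alive with `kill -0`, inspect its command name with `ps -o comm=`, then kill and wait. A self-contained sketch of that pattern, using a background `sleep` as a stand-in target process:

```shell
# Sketch of the killprocess pattern traced above: probe liveness with
# kill -0, read the command name, then kill and reap the process.
sleep 60 &
pid=$!

if kill -0 "$pid" 2>/dev/null; then
  name=$(ps --no-headers -o comm= "$pid")
  echo "killing process $name with pid $pid"
  kill "$pid"
fi
wait "$pid" 2>/dev/null || true   # reap; ignore the SIGTERM exit status
```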
00:04:46.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:46.491 18:42:08 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:46.491 18:42:08 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:46.491 18:42:08 version -- common/autotest_common.sh@1693 -- # lcov --version 00:04:46.491 18:42:08 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:46.491 18:42:08 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.491 18:42:08 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.491 18:42:08 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.491 18:42:08 version -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.491 18:42:08 version -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.491 18:42:08 version -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.491 18:42:08 version -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.491 18:42:08 version -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.491 18:42:08 version -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.491 18:42:08 version -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.491 18:42:08 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.491 18:42:08 version -- scripts/common.sh@344 -- # case "$op" in 00:04:46.491 18:42:08 version -- scripts/common.sh@345 -- # : 1 00:04:46.491 18:42:08 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.491 18:42:08 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:46.491 18:42:08 version -- scripts/common.sh@365 -- # decimal 1 00:04:46.491 18:42:08 version -- scripts/common.sh@353 -- # local d=1 00:04:46.491 18:42:08 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.491 18:42:08 version -- scripts/common.sh@355 -- # echo 1 00:04:46.491 18:42:08 version -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.491 18:42:08 version -- scripts/common.sh@366 -- # decimal 2 00:04:46.491 18:42:08 version -- scripts/common.sh@353 -- # local d=2 00:04:46.491 18:42:08 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.491 18:42:08 version -- scripts/common.sh@355 -- # echo 2 00:04:46.491 18:42:08 version -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.491 18:42:08 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.491 18:42:08 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.491 18:42:08 version -- scripts/common.sh@368 -- # return 0 00:04:46.491 18:42:08 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.491 18:42:08 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:46.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.491 --rc genhtml_branch_coverage=1 00:04:46.491 --rc genhtml_function_coverage=1 00:04:46.491 --rc genhtml_legend=1 00:04:46.491 --rc geninfo_all_blocks=1 00:04:46.491 --rc geninfo_unexecuted_blocks=1 00:04:46.491 00:04:46.491 ' 00:04:46.491 18:42:08 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:46.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.491 --rc genhtml_branch_coverage=1 00:04:46.491 --rc genhtml_function_coverage=1 00:04:46.491 --rc genhtml_legend=1 00:04:46.491 --rc geninfo_all_blocks=1 00:04:46.491 --rc geninfo_unexecuted_blocks=1 00:04:46.491 00:04:46.491 ' 00:04:46.491 18:42:08 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:46.491 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.491 --rc genhtml_branch_coverage=1 00:04:46.491 --rc genhtml_function_coverage=1 00:04:46.491 --rc genhtml_legend=1 00:04:46.491 --rc geninfo_all_blocks=1 00:04:46.491 --rc geninfo_unexecuted_blocks=1 00:04:46.491 00:04:46.491 ' 00:04:46.491 18:42:08 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:46.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.491 --rc genhtml_branch_coverage=1 00:04:46.491 --rc genhtml_function_coverage=1 00:04:46.491 --rc genhtml_legend=1 00:04:46.491 --rc geninfo_all_blocks=1 00:04:46.491 --rc geninfo_unexecuted_blocks=1 00:04:46.491 00:04:46.491 ' 00:04:46.491 18:42:08 version -- app/version.sh@17 -- # get_header_version major 00:04:46.491 18:42:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:46.491 18:42:08 version -- app/version.sh@14 -- # cut -f2 00:04:46.491 18:42:08 version -- app/version.sh@14 -- # tr -d '"' 00:04:46.491 18:42:08 version -- app/version.sh@17 -- # major=25 00:04:46.491 18:42:08 version -- app/version.sh@18 -- # get_header_version minor 00:04:46.491 18:42:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:46.491 18:42:08 version -- app/version.sh@14 -- # cut -f2 00:04:46.491 18:42:08 version -- app/version.sh@14 -- # tr -d '"' 00:04:46.491 18:42:08 version -- app/version.sh@18 -- # minor=1 00:04:46.491 18:42:08 version -- app/version.sh@19 -- # get_header_version patch 00:04:46.491 18:42:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:46.491 18:42:08 version -- app/version.sh@14 -- # cut -f2 00:04:46.491 18:42:08 version -- app/version.sh@14 -- # tr -d '"' 00:04:46.491 
18:42:08 version -- app/version.sh@19 -- # patch=0 00:04:46.491 18:42:08 version -- app/version.sh@20 -- # get_header_version suffix 00:04:46.491 18:42:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:46.491 18:42:08 version -- app/version.sh@14 -- # cut -f2 00:04:46.491 18:42:08 version -- app/version.sh@14 -- # tr -d '"' 00:04:46.491 18:42:08 version -- app/version.sh@20 -- # suffix=-pre 00:04:46.491 18:42:08 version -- app/version.sh@22 -- # version=25.1 00:04:46.491 18:42:08 version -- app/version.sh@25 -- # (( patch != 0 )) 00:04:46.491 18:42:08 version -- app/version.sh@28 -- # version=25.1rc0 00:04:46.491 18:42:08 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:04:46.491 18:42:08 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:04:46.491 18:42:08 version -- app/version.sh@30 -- # py_version=25.1rc0 00:04:46.491 18:42:08 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:04:46.491 00:04:46.491 real 0m0.245s 00:04:46.491 user 0m0.160s 00:04:46.491 sys 0m0.128s 00:04:46.491 18:42:08 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.491 18:42:08 version -- common/autotest_common.sh@10 -- # set +x 00:04:46.491 ************************************ 00:04:46.491 END TEST version 00:04:46.491 ************************************ 00:04:46.491 18:42:08 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:04:46.491 18:42:08 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:04:46.491 18:42:08 -- spdk/autotest.sh@194 -- # uname -s 00:04:46.492 18:42:08 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:04:46.492 18:42:08 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:46.492 18:42:08 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:46.492 18:42:08 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:04:46.492 18:42:08 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:04:46.492 18:42:08 -- spdk/autotest.sh@260 -- # timing_exit lib 00:04:46.492 18:42:08 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:46.492 18:42:08 -- common/autotest_common.sh@10 -- # set +x 00:04:46.750 18:42:08 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:04:46.750 18:42:08 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:04:46.750 18:42:08 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:04:46.750 18:42:08 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:04:46.750 18:42:08 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:04:46.750 18:42:08 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:04:46.750 18:42:08 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:46.750 18:42:08 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:46.750 18:42:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.750 18:42:08 -- common/autotest_common.sh@10 -- # set +x 00:04:46.750 ************************************ 00:04:46.750 START TEST nvmf_tcp 00:04:46.750 ************************************ 00:04:46.750 18:42:08 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:46.750 * Looking for test storage... 
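The get_header_version calls traced in the version test above derive major/minor/patch/suffix by grepping a `#define` out of `include/spdk/version.h`, cutting the tab-separated value field, and stripping quotes. A sketch of that pipeline against a temporary stand-in header (the real test reads the SPDK source tree):

```shell
# Sketch of the get_header_version pipeline traced above: grep the
# #define line, cut the tab-delimited value field, strip quotes.
# The temp file stands in for include/spdk/version.h.
hdr=$(mktemp)
printf '#define SPDK_VERSION_MAJOR\t25\n#define SPDK_VERSION_MINOR\t1\n#define SPDK_VERSION_SUFFIX\t"-pre"\n' > "$hdr"

get_header_version() {
  grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
}

ver="$(get_header_version MAJOR).$(get_header_version MINOR)$(get_header_version SUFFIX)"
echo "$ver"   # prints 25.1-pre
rm -f "$hdr"
```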
00:04:46.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:46.750 18:42:08 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:46.750 18:42:08 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:46.750 18:42:08 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:46.750 18:42:09 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:46.750 18:42:09 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.750 18:42:09 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.750 18:42:09 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.750 18:42:09 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.750 18:42:09 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.750 18:42:09 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.750 18:42:09 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.750 18:42:09 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.750 18:42:09 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.750 18:42:09 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.751 18:42:09 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.751 18:42:09 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:46.751 18:42:09 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:04:46.751 18:42:09 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.751 18:42:09 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:46.751 18:42:09 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:46.751 18:42:09 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:04:46.751 18:42:09 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.751 18:42:09 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:04:46.751 18:42:09 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.751 18:42:09 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:46.751 18:42:09 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:04:46.751 18:42:09 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.751 18:42:09 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:04:46.751 18:42:09 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.751 18:42:09 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.751 18:42:09 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.751 18:42:09 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:04:46.751 18:42:09 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.751 18:42:09 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:46.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.751 --rc genhtml_branch_coverage=1 00:04:46.751 --rc genhtml_function_coverage=1 00:04:46.751 --rc genhtml_legend=1 00:04:46.751 --rc geninfo_all_blocks=1 00:04:46.751 --rc geninfo_unexecuted_blocks=1 00:04:46.751 00:04:46.751 ' 00:04:46.751 18:42:09 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:46.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.751 --rc genhtml_branch_coverage=1 00:04:46.751 --rc genhtml_function_coverage=1 00:04:46.751 --rc genhtml_legend=1 00:04:46.751 --rc geninfo_all_blocks=1 00:04:46.751 --rc geninfo_unexecuted_blocks=1 00:04:46.751 00:04:46.751 ' 00:04:46.751 18:42:09 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:46.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.751 --rc genhtml_branch_coverage=1 00:04:46.751 --rc genhtml_function_coverage=1 00:04:46.751 --rc genhtml_legend=1 00:04:46.751 --rc geninfo_all_blocks=1 00:04:46.751 --rc geninfo_unexecuted_blocks=1 00:04:46.751 00:04:46.751 ' 00:04:46.751 18:42:09 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:46.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.751 --rc genhtml_branch_coverage=1 00:04:46.751 --rc genhtml_function_coverage=1 00:04:46.751 --rc genhtml_legend=1 00:04:46.751 --rc geninfo_all_blocks=1 00:04:46.751 --rc geninfo_unexecuted_blocks=1 00:04:46.751 00:04:46.751 ' 00:04:46.751 18:42:09 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:04:46.751 18:42:09 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:04:46.751 18:42:09 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:46.751 18:42:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:46.751 18:42:09 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.751 18:42:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:47.009 ************************************ 00:04:47.009 START TEST nvmf_target_core 00:04:47.009 ************************************ 00:04:47.009 18:42:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:47.009 * Looking for test storage... 
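Every sub-test above re-sources autotest_common.sh, which gates the LCOV options on `lt 1.15 2` — the cmp_versions trace that repeats throughout this log. A simplified sketch of that comparison: split both versions on `.`, `-`, and `:`, then compare field by field, treating missing fields as 0. The name `ver_lt` is hypothetical (scripts/common.sh spells it `lt`/`cmp_versions`), and this sketch handles numeric fields only:

```shell
# Simplified component-wise version compare modeled on the cmp_versions
# trace above (IFS=.-: split, numeric compare per field, missing fields = 0).
ver_lt() {
  local IFS='.-:'
  local -a v1 v2
  read -ra v1 <<<"$1"
  read -ra v2 <<<"$2"
  local i max=${#v1[@]}
  if (( ${#v2[@]} > max )); then max=${#v2[@]}; fi
  for (( i = 0; i < max; i++ )); do
    if (( ${v1[i]:-0} < ${v2[i]:-0} )); then return 0; fi
    if (( ${v1[i]:-0} > ${v2[i]:-0} )); then return 1; fi
  done
  return 1   # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "lcov 1.15 is older than 2"
```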
00:04:47.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:47.009 18:42:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:47.009 18:42:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:04:47.009 18:42:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:47.009 18:42:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:47.009 18:42:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.009 18:42:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.009 18:42:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.009 18:42:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.009 18:42:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.009 18:42:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.009 18:42:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.009 18:42:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.009 18:42:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.009 18:42:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.009 18:42:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.009 18:42:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:04:47.009 18:42:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:04:47.009 18:42:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.009 18:42:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.009 18:42:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:04:47.009 18:42:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:04:47.009 18:42:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.009 18:42:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:04:47.009 18:42:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.009 18:42:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:04:47.009 18:42:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:04:47.009 18:42:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.009 18:42:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:04:47.009 18:42:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.009 18:42:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.009 18:42:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.009 18:42:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:04:47.009 18:42:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.009 18:42:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:47.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.009 --rc genhtml_branch_coverage=1 00:04:47.009 --rc genhtml_function_coverage=1 00:04:47.009 --rc genhtml_legend=1 00:04:47.009 --rc geninfo_all_blocks=1 00:04:47.009 --rc geninfo_unexecuted_blocks=1 00:04:47.009 00:04:47.009 ' 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:47.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.010 --rc genhtml_branch_coverage=1 
00:04:47.010 --rc genhtml_function_coverage=1 00:04:47.010 --rc genhtml_legend=1 00:04:47.010 --rc geninfo_all_blocks=1 00:04:47.010 --rc geninfo_unexecuted_blocks=1 00:04:47.010 00:04:47.010 ' 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:47.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.010 --rc genhtml_branch_coverage=1 00:04:47.010 --rc genhtml_function_coverage=1 00:04:47.010 --rc genhtml_legend=1 00:04:47.010 --rc geninfo_all_blocks=1 00:04:47.010 --rc geninfo_unexecuted_blocks=1 00:04:47.010 00:04:47.010 ' 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:47.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.010 --rc genhtml_branch_coverage=1 00:04:47.010 --rc genhtml_function_coverage=1 00:04:47.010 --rc genhtml_legend=1 00:04:47.010 --rc geninfo_all_blocks=1 00:04:47.010 --rc geninfo_unexecuted_blocks=1 00:04:47.010 00:04:47.010 ' 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:47.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
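The `[: : integer expression expected` error recorded above comes from common.sh line 33 applying `-eq` to an empty variable (`'[' '' -eq 1 ']'`). Defaulting the empty value before the numeric test avoids the error; `flag` below is a stand-in for whichever config variable was empty in the log:

```shell
# Reproduces, then guards against, the "[: : integer expression expected"
# failure logged above. flag stands in for the empty config variable.
flag=""

# Unguarded: '[' '' -eq 1 ']' errors out, exactly as in the log.
[ "$flag" -eq 1 ] 2>/dev/null && echo "enabled"

# Guarded: default the empty value to 0 before the numeric compare.
if [ "${flag:-0}" -eq 1 ]; then
  echo "enabled"
else
  echo "disabled"   # this branch runs for the empty flag above
fi
```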
00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:47.010 ************************************ 00:04:47.010 START TEST nvmf_abort 00:04:47.010 ************************************ 00:04:47.010 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:47.270 * Looking for test storage... 
00:04:47.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.270 
18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:47.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.270 --rc genhtml_branch_coverage=1 00:04:47.270 --rc genhtml_function_coverage=1 00:04:47.270 --rc genhtml_legend=1 00:04:47.270 --rc geninfo_all_blocks=1 00:04:47.270 --rc 
geninfo_unexecuted_blocks=1 00:04:47.270 00:04:47.270 ' 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:47.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.270 --rc genhtml_branch_coverage=1 00:04:47.270 --rc genhtml_function_coverage=1 00:04:47.270 --rc genhtml_legend=1 00:04:47.270 --rc geninfo_all_blocks=1 00:04:47.270 --rc geninfo_unexecuted_blocks=1 00:04:47.270 00:04:47.270 ' 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:47.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.270 --rc genhtml_branch_coverage=1 00:04:47.270 --rc genhtml_function_coverage=1 00:04:47.270 --rc genhtml_legend=1 00:04:47.270 --rc geninfo_all_blocks=1 00:04:47.270 --rc geninfo_unexecuted_blocks=1 00:04:47.270 00:04:47.270 ' 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:47.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.270 --rc genhtml_branch_coverage=1 00:04:47.270 --rc genhtml_function_coverage=1 00:04:47.270 --rc genhtml_legend=1 00:04:47.270 --rc geninfo_all_blocks=1 00:04:47.270 --rc geninfo_unexecuted_blocks=1 00:04:47.270 00:04:47.270 ' 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:47.270 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:47.271 18:42:09 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:47.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:04:47.271 18:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:53.994 18:42:15 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:04:53.994 Found 0000:86:00.0 (0x8086 - 0x159b) 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:04:53.994 Found 0000:86:00.1 (0x8086 - 0x159b) 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:04:53.994 18:42:15 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:04:53.994 Found net devices under 0000:86:00.0: cvl_0_0 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.1: cvl_0_1' 00:04:53.994 Found net devices under 0000:86:00.1: cvl_0_1 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:04:53.994 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:04:53.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:04:53.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.487 ms 00:04:53.995 00:04:53.995 --- 10.0.0.2 ping statistics --- 00:04:53.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:53.995 rtt min/avg/max/mdev = 0.487/0.487/0.487/0.000 ms 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:04:53.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:04:53.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:04:53.995 00:04:53.995 --- 10.0.0.1 ping statistics --- 00:04:53.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:53.995 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3466208 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3466208 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3466208 ']' 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:53.995 [2024-11-20 18:42:15.593476] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:04:53.995 [2024-11-20 18:42:15.593524] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:04:53.995 [2024-11-20 18:42:15.658266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:53.995 [2024-11-20 18:42:15.702647] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:04:53.995 [2024-11-20 18:42:15.702681] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:04:53.995 [2024-11-20 18:42:15.702689] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:53.995 [2024-11-20 18:42:15.702695] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:53.995 [2024-11-20 18:42:15.702700] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:04:53.995 [2024-11-20 18:42:15.704069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:53.995 [2024-11-20 18:42:15.704179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.995 [2024-11-20 18:42:15.704179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:53.995 [2024-11-20 18:42:15.849634] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:53.995 Malloc0 00:04:53.995 18:42:15 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:53.995 Delay0 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:53.995 [2024-11-20 18:42:15.920437] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.995 18:42:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:04:53.995 [2024-11-20 18:42:16.056513] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:04:55.897 Initializing NVMe Controllers 00:04:55.897 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:04:55.897 controller IO queue size 128 less than required 00:04:55.897 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:04:55.897 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:04:55.897 Initialization complete. Launching workers. 
00:04:55.897 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36047 00:04:55.897 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36108, failed to submit 62 00:04:55.897 success 36051, unsuccessful 57, failed 0 00:04:55.897 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:04:55.897 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.897 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:55.897 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.897 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:04:55.897 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:04:55.897 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:04:55.897 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:04:55.897 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:04:55.897 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:04:55.897 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:04:55.897 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:04:55.897 rmmod nvme_tcp 00:04:55.897 rmmod nvme_fabrics 00:04:55.897 rmmod nvme_keyring 00:04:55.897 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:04:55.897 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:04:55.897 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:04:55.897 18:42:18 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3466208 ']' 00:04:55.897 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3466208 00:04:55.897 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3466208 ']' 00:04:55.897 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3466208 00:04:55.897 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:04:55.897 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:55.897 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3466208 00:04:55.897 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:04:55.897 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:04:55.897 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3466208' 00:04:55.897 killing process with pid 3466208 00:04:55.897 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3466208 00:04:55.897 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3466208 00:04:56.158 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:04:56.158 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:04:56.158 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:04:56.158 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:04:56.158 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:04:56.158 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@791 -- # iptables-save 00:04:56.158 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:04:56.158 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:04:56.158 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:04:56.158 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:56.158 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:56.158 18:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:58.695 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:04:58.695 00:04:58.695 real 0m11.136s 00:04:58.695 user 0m11.535s 00:04:58.695 sys 0m5.334s 00:04:58.695 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.695 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.695 ************************************ 00:04:58.695 END TEST nvmf_abort 00:04:58.695 ************************************ 00:04:58.695 18:42:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:04:58.695 18:42:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:58.695 18:42:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.695 18:42:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:58.695 ************************************ 00:04:58.695 START TEST nvmf_ns_hotplug_stress 00:04:58.695 ************************************ 00:04:58.695 18:42:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:04:58.695 * Looking for test storage... 00:04:58.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:58.695 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:58.695 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.696 
18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.696 18:42:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:58.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.696 --rc genhtml_branch_coverage=1 00:04:58.696 --rc genhtml_function_coverage=1 00:04:58.696 --rc genhtml_legend=1 00:04:58.696 --rc geninfo_all_blocks=1 00:04:58.696 --rc geninfo_unexecuted_blocks=1 00:04:58.696 00:04:58.696 ' 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:58.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.696 --rc genhtml_branch_coverage=1 00:04:58.696 --rc genhtml_function_coverage=1 00:04:58.696 --rc genhtml_legend=1 00:04:58.696 --rc geninfo_all_blocks=1 00:04:58.696 --rc geninfo_unexecuted_blocks=1 00:04:58.696 00:04:58.696 ' 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:58.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.696 --rc genhtml_branch_coverage=1 00:04:58.696 --rc genhtml_function_coverage=1 00:04:58.696 --rc genhtml_legend=1 00:04:58.696 --rc geninfo_all_blocks=1 00:04:58.696 --rc geninfo_unexecuted_blocks=1 00:04:58.696 00:04:58.696 ' 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:58.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.696 --rc genhtml_branch_coverage=1 00:04:58.696 --rc genhtml_function_coverage=1 00:04:58.696 --rc genhtml_legend=1 00:04:58.696 --rc geninfo_all_blocks=1 00:04:58.696 --rc geninfo_unexecuted_blocks=1 00:04:58.696 
00:04:58.696 ' 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:58.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:58.696 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:58.697 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:04:58.697 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:58.697 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:58.697 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:58.697 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:58.697 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:58.697 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:58.697 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:58.697 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:58.697 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:58.697 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:04:58.697 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:04:58.697 18:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:05.269 18:42:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:05.269 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:05.269 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:05.269 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:05.270 18:42:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:05.270 Found net devices under 0000:86:00.0: cvl_0_0 00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:05.270 18:42:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:05:05.270 Found net devices under 0000:86:00.1: cvl_0_1
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:05:05.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:05:05.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms
00:05:05.270
00:05:05.270 --- 10.0.0.2 ping statistics ---
00:05:05.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:05.270 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:05:05.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:05:05.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms
00:05:05.270
00:05:05.270 --- 10.0.0.1 ping statistics ---
00:05:05.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:05.270 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3470237
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3470237
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3470237 ']'
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
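The nvmf/common.sh@267–@291 commands in the log above build the test network: the target NIC (cvl_0_0) is moved into a private network namespace while the initiator NIC (cvl_0_1) stays in the root namespace, so a single host can exercise NVMe/TCP over real hardware. A minimal standalone sketch of that same sequence follows; interface names, addresses, and the port-4420 iptables rule are taken from this log, but the script itself is illustrative, not the autotest's own code. It must be run as root on a disposable test box.

```shell
#!/usr/bin/env bash
# Sketch of the namespace setup performed by nvmf_tcp_init in nvmf/common.sh.
# Mirrors the log: target NIC gets 10.0.0.2/24 inside a netns, initiator NIC
# keeps 10.0.0.1/24 in the root namespace.
set -euo pipefail

setup_test_net() {
    local tgt_if=$1 ini_if=$2 ns=$3
    ip -4 addr flush "$tgt_if"
    ip -4 addr flush "$ini_if"
    ip netns add "$ns"
    ip link set "$tgt_if" netns "$ns"               # isolate the target side
    ip addr add 10.0.0.1/24 dev "$ini_if"
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    ip link set "$ini_if" up
    ip netns exec "$ns" ip link set "$tgt_if" up
    ip netns exec "$ns" ip link set lo up
    # allow NVMe/TCP traffic to the default port
    iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                              # sanity-check both directions
    ip netns exec "$ns" ping -c 1 10.0.0.1
}

# Only define the function here; invoking it requires root and real NICs, e.g.:
#   setup_test_net cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
if [[ $EUID -ne 0 ]]; then
    echo "setup_test_net: needs root; defining only"
fi
```

The two ping checks at the end correspond to the @290/@291 entries above: one from the root namespace to the target address, one from inside the namespace back to the initiator.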
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:05.270 18:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:05:05.270 [2024-11-20 18:42:26.837814] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization...
00:05:05.270 [2024-11-20 18:42:26.837858] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:05:05.270 [2024-11-20 18:42:26.916862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:05.270 [2024-11-20 18:42:26.958147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:05:05.270 [2024-11-20 18:42:26.958182] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:05:05.270 [2024-11-20 18:42:26.958189] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:05.270 [2024-11-20 18:42:26.958195] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:05.270 [2024-11-20 18:42:26.958200] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
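Above, nvmf_tgt is launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xE`) and `waitforlisten 3470237` blocks until the SPDK RPC socket appears before any rpc.py configuration runs. A rough, simplified equivalent of that wait loop is sketched below; the socket path and the 100-retry budget come from the log (`rpc_addr=/var/tmp/spdk.sock`, `max_retries=100`), while the 0.1 s poll interval and the function name are assumptions, not the autotest's actual implementation.

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten behavior seen in the log: poll for the SPDK
# RPC Unix socket until the target process is ready or retries run out.
wait_for_rpc_sock() {
    local sock=${1:-/var/tmp/spdk.sock} retries=${2:-100}
    echo "Waiting for process to start up and listen on UNIX domain socket ${sock}..."
    while (( retries-- > 0 )); do
        [[ -S $sock ]] && return 0    # -S: path exists and is a socket
        sleep 0.1
    done
    return 1                          # target never came up
}
```

Only once this kind of wait succeeds do the rpc.py calls that follow in the log (nvmf_create_transport and friends) make sense.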
00:05:05.270 [2024-11-20 18:42:26.959632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:05.270 [2024-11-20 18:42:26.959740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:05.270 [2024-11-20 18:42:26.959741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:05.270 18:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:05.270 18:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0
00:05:05.270 18:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:05:05.270 18:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:05.270 18:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:05:05.270 18:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:05:05.271 18:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:05:05.271 18:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:05:05.271 [2024-11-20 18:42:27.257651] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:05.271 18:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:05:05.271 18:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:05:05.528 [2024-11-20 18:42:27.679118] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:05:05.528 18:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:05:05.786 18:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:05:05.786 Malloc0
00:05:06.043 18:42:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:05:06.043 Delay0
00:05:06.043 18:42:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:06.301 18:42:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:05:06.558 NULL1
00:05:06.558 18:42:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:05:06.815 18:42:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3470632
00:05:06.815 18:42:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:05:06.815 18:42:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3470632
00:05:06.815 18:42:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:07.746 Read completed with error (sct=0, sc=11)
00:05:08.003 18:42:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:08.003 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:08.003 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:08.003 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:08.003 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:08.003 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:08.003 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:08.003 18:42:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:05:08.003 18:42:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:05:08.259 true
00:05:08.259 18:42:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3470632
00:05:08.259 18:42:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:09.193 18:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress
-- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:09.450 18:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:05:09.450 18:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:05:09.450 true
00:05:09.450 18:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3470632
00:05:09.450 18:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:09.708 18:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:09.966 18:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:05:09.966 18:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:05:10.224 true
00:05:10.224 18:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3470632
00:05:10.224 18:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:11.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:11.156 18:42:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:11.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:11.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:11.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:11.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:11.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:11.414 18:42:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:05:11.414 18:42:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:05:11.671 true
00:05:11.671 18:42:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3470632
00:05:11.671 18:42:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:12.600 18:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:12.600 18:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:05:12.600 18:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:05:12.857 true
00:05:12.858 18:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3470632
00:05:12.858 18:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:12.858 18:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:13.115 18:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:05:13.115 18:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:05:13.373 true
00:05:13.373 18:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3470632
00:05:13.373 18:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:14.305 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:14.305 18:42:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:14.563 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:14.563 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:14.563 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:14.563 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:14.563 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:14.563 18:42:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:05:14.563 18:42:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:05:14.821 true
00:05:14.821 18:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3470632
00:05:14.821 18:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:15.752 18:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:15.752 18:42:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:05:15.752 18:42:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:05:16.010 true
00:05:16.010 18:42:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3470632
00:05:16.010 18:42:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:16.267 18:42:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:16.534 18:42:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:05:16.534 18:42:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:05:16.534 true
00:05:16.534 18:42:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3470632
00:05:16.534 18:42:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:17.909 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:17.909 18:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:17.909 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:17.909 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:17.909 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:17.909 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:17.909 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:17.909 18:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:05:17.909 18:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:05:18.167 true
00:05:18.167 18:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3470632
00:05:18.167 18:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:19.112 18:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:19.112 18:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:05:19.112 18:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:05:19.370 true
00:05:19.370 18:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3470632
00:05:19.370 18:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:19.628 18:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:19.628 18:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:05:19.628 18:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:05:19.886 true
00:05:19.886 18:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3470632
00:05:19.886 18:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:20.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:20.819 18:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:20.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:21.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:21.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:21.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:21.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:21.077 18:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:05:21.077 18:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:05:21.335 true
00:05:21.335 18:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3470632
00:05:21.335 18:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:22.270 18:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:22.270 18:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:05:22.270 18:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:05:22.528 true
00:05:22.528 18:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3470632
00:05:22.528 18:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:22.786 18:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:22.786 18:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:05:22.786 18:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:05:23.044 true
00:05:23.044 18:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3470632
00:05:23.044 18:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:24.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:24.419 18:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:24.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:24.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:24.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:24.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:24.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:24.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:24.419 18:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:05:24.419 18:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:05:24.677 true
00:05:24.677 18:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3470632
00:05:24.677 18:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:25.613 18:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:25.613 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:25.613 18:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:05:25.613 18:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:05:25.871 true
00:05:25.871 18:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3470632
00:05:25.871 18:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:26.185 18:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:26.486 18:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:05:26.486 18:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:05:26.486 true
00:05:26.486 18:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3470632
00:05:26.486 18:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:26.744 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:26.744 18:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:26.744 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:26.744 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:26.744 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:26.744 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:27.002 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:27.002 18:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:05:27.002 18:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:05:27.260 true
00:05:27.260 18:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3470632
00:05:27.260 18:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:28.194 18:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:28.194 18:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:05:28.194 18:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:05:28.452 true
00:05:28.452 18:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3470632
00:05:28.452 18:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:28.710 18:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:28.710 18:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:05:28.710 18:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:05:28.968 true
00:05:28.968 18:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3470632
00:05:28.968 18:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:30.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:30.344 18:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:30.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:30.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:30.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:30.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:30.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:30.344 18:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:05:30.344 18:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:05:30.344 true
00:05:30.602 18:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3470632
00:05:30.602 18:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:31.168 18:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:31.426 18:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:05:31.426 18:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:05:31.685 true
00:05:31.685 18:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3470632 00:05:31.685 18:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.944 18:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.944 18:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:31.944 18:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:32.202 true 00:05:32.202 18:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3470632 00:05:32.202 18:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.577 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.577 18:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.577 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.577 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.577 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.577 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.577 Message suppressed 999 times: Read completed 
with error (sct=0, sc=11) 00:05:33.577 18:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:33.577 18:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:33.836 true 00:05:33.836 18:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3470632 00:05:33.836 18:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.770 18:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.770 18:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:34.770 18:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:35.029 true 00:05:35.030 18:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3470632 00:05:35.030 18:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.030 18:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.288 18:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1027 00:05:35.288 18:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:35.546 true 00:05:35.546 18:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3470632 00:05:35.546 18:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.501 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:36.759 18:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:36.759 18:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:36.759 18:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:37.023 Initializing NVMe Controllers 00:05:37.023 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:37.023 Controller IO queue size 128, less than required. 00:05:37.023 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:37.023 Controller IO queue size 128, less than required. 00:05:37.023 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:05:37.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:05:37.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:05:37.023 Initialization complete. Launching workers. 00:05:37.023 ======================================================== 00:05:37.023 Latency(us) 00:05:37.023 Device Information : IOPS MiB/s Average min max 00:05:37.023 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2039.90 1.00 43488.71 2153.50 1020680.34 00:05:37.023 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 18013.65 8.80 7105.72 1511.26 450549.54 00:05:37.023 ======================================================== 00:05:37.023 Total : 20053.55 9.79 10806.70 1511.26 1020680.34 00:05:37.023 00:05:37.023 true 00:05:37.023 18:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3470632 00:05:37.023 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3470632) - No such process 00:05:37.023 18:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3470632 00:05:37.023 18:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.282 18:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:37.541 18:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:05:37.541 18:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:05:37.541 18:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:05:37.541 18:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:37.541 18:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:05:37.541 null0 00:05:37.541 18:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:37.541 18:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:37.541 18:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:05:37.799 null1 00:05:37.799 18:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:37.799 18:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:37.799 18:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:05:38.058 null2 00:05:38.058 18:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:38.058 18:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:38.058 18:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:38.376 null3 00:05:38.376 18:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:38.376 18:43:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:38.376 18:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:38.376 null4 00:05:38.376 18:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:38.376 18:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:38.376 18:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:05:38.634 null5 00:05:38.634 18:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:38.634 18:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:38.634 18:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:05:38.894 null6 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:05:38.894 null7 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < 
nthreads )) 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:05:38.894 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:39.157 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.157 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:39.157 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:39.157 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:39.157 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:39.157 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:05:39.157 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:05:39.157 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:39.157 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:39.157 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:39.157 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:39.157 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.157 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:05:39.157 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:39.157 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:05:39.157 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:39.157 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:39.157 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:39.157 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:39.157 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.157 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3476104 3476105 3476106 3476109 3476111 3476113 3476115 3476116 00:05:39.157 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:05:39.157 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:39.158 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:05:39.158 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:39.158 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.158 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:39.158 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:39.158 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.158 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:39.158 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:39.158 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:39.158 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:39.158 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:39.158 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:39.416 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.416 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.416 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:39.416 18:43:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.416 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.416 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:39.416 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.416 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.416 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:39.416 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.416 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.416 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:39.416 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.416 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.416 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:39.416 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:05:39.416 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.416 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:39.416 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.416 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.416 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:39.416 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.416 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.416 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:39.675 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:39.675 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:39.675 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:05:39.675 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.675 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:39.675 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:39.675 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:39.675 18:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:39.935 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.935 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.935 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:39.935 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.935 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.935 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:39.935 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.935 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.935 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:39.935 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.935 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.935 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:39.935 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.935 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.935 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:39.935 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.935 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.935 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:39.935 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.935 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.935 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:39.935 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.935 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.935 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:40.195 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:40.195 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.195 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:40.195 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:40.195 18:43:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:40.195 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:40.195 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:40.195 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:40.195 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.195 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.195 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:40.195 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.195 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.195 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:40.195 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.195 18:43:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.195 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:40.195 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.195 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.195 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:40.195 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.195 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.195 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:40.195 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.195 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.195 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.195 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.195 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:05:40.195 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:40.195 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.195 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.195 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:40.454 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:40.454 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:40.454 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:40.454 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:40.454 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.454 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:40.454 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:40.454 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:40.713 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.713 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.713 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:40.713 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.713 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.713 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:40.713 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.713 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.713 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:05:40.713 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.713 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.713 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:40.713 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.713 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.713 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.713 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.713 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:40.713 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:40.713 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.713 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.713 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:40.713 18:43:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.713 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.713 18:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:40.973 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:40.973 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.973 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:40.973 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:40.973 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:40.973 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:40.973 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:40.973 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:40.973 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.973 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.973 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:40.973 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.973 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.232 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:41.232 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.232 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.232 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:41.232 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.232 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.232 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:41.232 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.232 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.232 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:41.232 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.232 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.232 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:41.232 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.232 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.232 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:41.232 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.232 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.232 18:43:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:41.232 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:41.232 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:41.232 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:41.232 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:41.232 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.232 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:41.232 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:41.232 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:41.491 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.491 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.491 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.491 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:41.491 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.491 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:41.491 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.491 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.491 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:41.491 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.491 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.492 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:41.492 18:43:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.492 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.492 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:41.492 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.492 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.492 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.492 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:41.492 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.492 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:41.492 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.492 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.492 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:41.751 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:41.751 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:41.751 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:41.751 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:41.751 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:41.751 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:41.751 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:41.751 18:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.009 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.009 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.009 18:43:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:42.009 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.009 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.009 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.009 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.009 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:42.010 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:42.010 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.010 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.010 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:42.010 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.010 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.010 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:42.010 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.010 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.010 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:42.010 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.010 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.010 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:42.010 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.010 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.010 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:42.010 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:42.010 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:05:42.010 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:42.269 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:42.269 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:42.269 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:42.269 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.269 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:42.269 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.269 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.269 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:42.269 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:05:42.269 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.269 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:42.269 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.269 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.269 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:42.269 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.269 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.269 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:42.269 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.269 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.269 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:42.269 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.269 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.269 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:42.269 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.269 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.269 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:42.269 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.269 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.269 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:42.528 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:42.528 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:42.528 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:42.528 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:42.528 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:42.528 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:42.528 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:42.528 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.787 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.787 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.787 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:42.787 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.787 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.787 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:05:42.787 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.787 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.787 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:42.787 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.787 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.787 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:42.787 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.787 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.787 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:42.787 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.787 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.787 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:42.787 18:43:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.787 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.788 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:42.788 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.788 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.788 18:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:43.047 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:43.047 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:43.047 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:43.047 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:43.047 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.047 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:43.048 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:43.048 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:43.048 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.048 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:43.307 rmmod nvme_tcp 00:05:43.307 rmmod nvme_fabrics 00:05:43.307 rmmod nvme_keyring 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@128 -- # set -e 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3470237 ']' 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3470237 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3470237 ']' 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3470237 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3470237 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3470237' 00:05:43.307 killing process with pid 3470237 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3470237 00:05:43.307 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3470237 00:05:43.567 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:43.567 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:43.567 18:43:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:43.567 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:05:43.567 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:05:43.567 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:43.567 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:05:43.567 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:43.567 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:43.567 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:43.567 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:43.567 18:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:45.475 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:45.475 00:05:45.475 real 0m47.209s 00:05:45.475 user 3m12.294s 00:05:45.475 sys 0m15.622s 00:05:45.475 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.475 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:45.475 ************************************ 00:05:45.475 END TEST nvmf_ns_hotplug_stress 00:05:45.475 ************************************ 00:05:45.475 18:43:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:45.475 18:43:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:45.475 18:43:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.475 18:43:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:45.735 ************************************ 00:05:45.735 START TEST nvmf_delete_subsystem 00:05:45.735 ************************************ 00:05:45.735 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:45.735 * Looking for test storage... 00:05:45.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:45.735 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:45.735 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:05:45.735 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:45.735 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:45.735 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.735 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.735 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.735 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.735 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.735 
18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.735 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.735 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.735 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.735 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.735 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.735 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:05:45.735 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:05:45.735 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.735 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:45.735 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:05:45.735 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:05:45.735 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.735 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:05:45.735 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.735 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:05:45.735 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:05:45.735 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.735 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:05:45.735 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.735 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.735 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.735 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:05:45.735 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.735 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:45.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.735 --rc genhtml_branch_coverage=1 00:05:45.735 --rc genhtml_function_coverage=1 00:05:45.735 --rc genhtml_legend=1 
00:05:45.735 --rc geninfo_all_blocks=1 00:05:45.735 --rc geninfo_unexecuted_blocks=1 00:05:45.735 00:05:45.735 ' 00:05:45.735 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:45.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.735 --rc genhtml_branch_coverage=1 00:05:45.735 --rc genhtml_function_coverage=1 00:05:45.735 --rc genhtml_legend=1 00:05:45.735 --rc geninfo_all_blocks=1 00:05:45.735 --rc geninfo_unexecuted_blocks=1 00:05:45.735 00:05:45.735 ' 00:05:45.735 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:45.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.735 --rc genhtml_branch_coverage=1 00:05:45.735 --rc genhtml_function_coverage=1 00:05:45.735 --rc genhtml_legend=1 00:05:45.735 --rc geninfo_all_blocks=1 00:05:45.735 --rc geninfo_unexecuted_blocks=1 00:05:45.736 00:05:45.736 ' 00:05:45.736 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:45.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.736 --rc genhtml_branch_coverage=1 00:05:45.736 --rc genhtml_function_coverage=1 00:05:45.736 --rc genhtml_legend=1 00:05:45.736 --rc geninfo_all_blocks=1 00:05:45.736 --rc geninfo_unexecuted_blocks=1 00:05:45.736 00:05:45.736 ' 00:05:45.736 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:45.736 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:05:45.736 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:45.736 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:45.736 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:45.736 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:45.736 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:45.736 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:45.736 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:45.736 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:45.736 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:45.736 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:45.736 18:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@15 -- # shopt -s extglob 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:45.736 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:45.736 18:43:08 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:05:45.736 18:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:52.316 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:52.316 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:05:52.316 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:52.316 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:52.316 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:52.316 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:52.316 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:52.316 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:05:52.316 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:52.316 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:05:52.316 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:05:52.316 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:05:52.316 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:05:52.316 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:05:52.316 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:05:52.316 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:52.316 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:52.316 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:52.316 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:52.316 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:52.316 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:52.316 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:52.316 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:52.316 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:52.317 18:43:13 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:52.317 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:52.317 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:52.317 Found net devices under 0000:86:00.0: cvl_0_0 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:52.317 18:43:13 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:52.317 Found net devices under 0000:86:00.1: cvl_0_1 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 
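The nvmf_tcp_init steps traced here give the target port its own network namespace so that initiator and target traffic must cross the physical link between the two E810 ports, rather than short-circuiting through the host stack. A condensed sketch of the same wiring, assuming root privileges and the interface names and addresses shown in the log (this mirrors, but is not, the nvmf/common.sh implementation):

```shell
# Move the target-side port into a private namespace and address both ends.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                 # target port now lives in $NS
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port; the comment tags the rule for later cleanup.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: allow 4420'
# Verify reachability in both directions before starting the target:
ping -c 1 10.0.0.2                              # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1          # target -> initiator
```

This requires root and two cabled ports, so it is a configuration sketch rather than something to run as-is; the ping round trips in the log (0.445 ms and 0.140 ms) confirm exactly this check succeeding.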
00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:52.317 18:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:52.317 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:52.317 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:52.317 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:52.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:52.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:05:52.317 00:05:52.317 --- 10.0.0.2 ping statistics --- 00:05:52.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:52.317 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:05:52.317 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:52.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:52.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:05:52.317 00:05:52.317 --- 10.0.0.1 ping statistics --- 00:05:52.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:52.317 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:05:52.317 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:52.317 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:05:52.317 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:52.317 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:52.317 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:52.317 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:52.317 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:52.317 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:52.317 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:52.317 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:05:52.317 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:52.317 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:52.317 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:52.317 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3480562 00:05:52.317 18:43:14 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3480562 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3480562 ']' 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:52.318 [2024-11-20 18:43:14.124547] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:05:52.318 [2024-11-20 18:43:14.124587] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:52.318 [2024-11-20 18:43:14.203222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.318 [2024-11-20 18:43:14.244830] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:05:52.318 [2024-11-20 18:43:14.244866] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:52.318 [2024-11-20 18:43:14.244873] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:52.318 [2024-11-20 18:43:14.244879] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:52.318 [2024-11-20 18:43:14.244885] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:52.318 [2024-11-20 18:43:14.246102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.318 [2024-11-20 18:43:14.246101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:52.318 [2024-11-20 18:43:14.383182] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:52.318 [2024-11-20 18:43:14.403400] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:52.318 NULL1 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.318 18:43:14 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:52.318 Delay0 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3480740 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:05:52.318 18:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:52.318 [2024-11-20 18:43:14.515166] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
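delete_subsystem.sh provisions the target entirely over the JSON-RPC socket, then deletes the subsystem while spdk_nvme_perf is driving queued I/O through the 1-second Delay0 bdev, which is what produces the burst of aborted completions below. A hedged sketch of the same sequence using scripts/rpc.py directly (the log drives this through the rpc_cmd wrapper; the rpc.py path and the explicit backgrounding are assumptions):

```shell
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192      # TCP transport, 8 KiB IO unit
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10                # allow-any-host, max 10 namespaces
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512              # 1000 MiB backing bdev, 512 B blocks
$RPC bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s added latency on every path
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Queue deep I/O from the root namespace, then yank the subsystem out from
# under it: the in-flight commands complete with errors (sct=0, sc=8).
spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
sleep 2
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
```

The Delay0 bdev is the point of the test: with every I/O held for a second and a queue depth of 128, the delete is guaranteed to race against in-flight commands, exercising the teardown path rather than an idle subsystem.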
00:05:54.217 18:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:05:54.217 18:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.217 18:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 starting I/O failed: -6 00:05:54.475 Write completed with error (sct=0, sc=8) 00:05:54.475 Write completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 starting I/O failed: -6 00:05:54.475 Write completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 starting I/O failed: -6 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Write completed with error (sct=0, sc=8) 00:05:54.475 starting I/O failed: -6 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 starting I/O failed: -6 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Write completed with error (sct=0, sc=8) 00:05:54.475 starting I/O failed: -6 00:05:54.475 Write completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 starting I/O failed: -6 
00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Write completed with error (sct=0, sc=8) 00:05:54.475 Write completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 starting I/O failed: -6 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Write completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 starting I/O failed: -6 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 starting I/O failed: -6 00:05:54.475 Write completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 starting I/O failed: -6 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 starting I/O failed: -6 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 [2024-11-20 18:43:16.721108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x57e2c0 is same with the state(6) to be set 00:05:54.475 Write completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Write completed with error (sct=0, sc=8) 00:05:54.475 Write completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, 
sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Write completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Write completed with error (sct=0, sc=8) 00:05:54.475 Write completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Write completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Write completed with error (sct=0, sc=8) 00:05:54.475 Write completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Write completed with error (sct=0, sc=8) 00:05:54.475 Write completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Write completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Write completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Write completed with error (sct=0, sc=8) 00:05:54.475 Write completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Write 
completed with error (sct=0, sc=8) 00:05:54.475 Write completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.475 Read completed with error (sct=0, sc=8) 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 [2024-11-20 18:43:16.721685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x57e4a0 is same with the state(6) to be set 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 Write completed with error (sct=0, sc=8) 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 starting I/O failed: -6 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 starting I/O failed: -6 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 Write completed with error (sct=0, sc=8) 00:05:54.476 starting I/O failed: -6 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 Write completed with error (sct=0, sc=8) 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 starting I/O failed: -6 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 starting I/O failed: -6 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 Read 
completed with error (sct=0, sc=8) 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 starting I/O failed: -6 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 Write completed with error (sct=0, sc=8) 00:05:54.476 starting I/O failed: -6 00:05:54.476 Write completed with error (sct=0, sc=8) 00:05:54.476 Write completed with error (sct=0, sc=8) 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 starting I/O failed: -6 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 Write completed with error (sct=0, sc=8) 00:05:54.476 Write completed with error (sct=0, sc=8) 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 starting I/O failed: -6 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 starting I/O failed: -6 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 starting I/O failed: -6 00:05:54.476 Write completed with error (sct=0, sc=8) 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 starting I/O failed: -6 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 starting I/O failed: -6 00:05:54.476 Write completed with error (sct=0, sc=8) 00:05:54.476 Write completed with error (sct=0, sc=8) 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 starting I/O failed: -6 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 starting I/O failed: -6 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 Read completed with error (sct=0, sc=8) 00:05:54.476 Write completed with error (sct=0, 
sc=8) 00:05:54.476 starting I/O failed: -6
00:05:54.476 Read completed with error (sct=0, sc=8)
00:05:54.476 Write completed with error (sct=0, sc=8)
[... many identical "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" records at 00:05:54.476 omitted ...]
00:05:55.410 [2024-11-20 18:43:17.693073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x57f9a0 is same with the state(6) to be set
[... repeated completion-error records at 00:05:55.410 omitted ...]
00:05:55.410 [2024-11-20 18:43:17.724332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x57e680 is same with the state(6) to be set
[... repeated completion-error records omitted ...]
00:05:55.410 [2024-11-20 18:43:17.724702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x57e860 is same with the state(6) to be set
[... repeated completion-error records omitted ...]
00:05:55.411 [2024-11-20 18:43:17.725146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f203c00d680 is same with the state(6) to be set
[... repeated completion-error records omitted ...]
00:05:55.411 [2024-11-20 18:43:17.726762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f203c000c40 is same with the state(6) to be set
00:05:55.411 Initializing NVMe Controllers
00:05:55.411 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:05:55.411 Controller IO queue size 128, less than required.
00:05:55.411 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:55.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:05:55.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:05:55.411 Initialization complete. Launching workers.
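The Total row in the perf latency summaries can be cross-checked: total IOPS is the sum of the per-core rates, and the Total average is the IOPS-weighted mean of the per-core averages. A quick awk sketch using the figures from the second latency summary later in this log (128 IOPS per core, averages 1002782.84 µs and 1003825.57 µs; variable names are illustrative):

```shell
#!/bin/sh
# Cross-check of the perf Total row: total IOPS is the sum of the per-core
# rates, and the Total average is the IOPS-weighted mean of per-core averages.
# Figures below are copied from the second latency summary in this log.
awk 'BEGIN {
    iops2 = 128.00; avg2 = 1002782.84   # NSID 1 from core 2
    iops3 = 128.00; avg3 = 1003825.57   # NSID 1 from core 3
    total_iops = iops2 + iops3
    total_avg  = (iops2 * avg2 + iops3 * avg3) / total_iops
    printf "Total IOPS: %.2f  Average: %.1f us\n", total_iops, total_avg
}'
# -> Total IOPS: 256.00  Average: 1003304.2 us  (the log reports 1003304.21)
```

The same weighting reproduces the first summary's Total (342.33 IOPS, ~938386 µs) to within rounding of the printed per-core rates.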
00:05:55.411 ========================================================
00:05:55.411 Latency(us)
00:05:55.411 Device Information : IOPS MiB/s Average min max
00:05:55.411 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 172.16 0.08 888084.63 576.86 1041908.61
00:05:55.411 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 170.17 0.08 989276.24 276.67 2000907.05
00:05:55.411 ========================================================
00:05:55.411 Total : 342.33 0.17 938386.27 276.67 2000907.05
00:05:55.411
00:05:55.411 [2024-11-20 18:43:17.727296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x57f9a0 (9): Bad file descriptor
00:05:55.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:05:55.411 18:43:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.411 18:43:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:05:55.411 18:43:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3480740 00:05:55.411 18:43:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:05:55.977 18:43:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:05:55.977 18:43:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3480740 00:05:55.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3480740) - No such process 00:05:55.977 18:43:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3480740 00:05:55.977 18:43:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:05:55.977 18:43:18
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3480740 00:05:55.977 18:43:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:05:55.977 18:43:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.977 18:43:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:05:55.977 18:43:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.977 18:43:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3480740 00:05:55.977 18:43:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:05:55.977 18:43:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:55.977 18:43:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:55.977 18:43:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:55.977 18:43:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:55.977 18:43:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.977 18:43:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:55.977 18:43:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.977 18:43:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:55.977 
18:43:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.977 18:43:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:55.977 [2024-11-20 18:43:18.252366] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:55.977 18:43:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.977 18:43:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.977 18:43:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.977 18:43:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:55.977 18:43:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.977 18:43:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3481310 00:05:55.977 18:43:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:55.977 18:43:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:05:55.977 18:43:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3481310 00:05:55.977 18:43:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:56.235 [2024-11-20 18:43:18.336715] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to 
the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:05:56.493 18:43:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:56.493 18:43:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3481310 00:05:56.493 18:43:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:57.059 18:43:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:57.059 18:43:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3481310 00:05:57.059 18:43:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:57.623 18:43:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:57.623 18:43:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3481310 00:05:57.623 18:43:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:58.187 18:43:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:58.187 18:43:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3481310 00:05:58.187 18:43:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:58.753 18:43:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:58.753 18:43:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3481310 00:05:58.753 18:43:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:59.010 18:43:21 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:59.010 18:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3481310 00:05:59.010 18:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:05:59.267 Initializing NVMe Controllers
00:05:59.267 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:05:59.267 Controller IO queue size 128, less than required.
00:05:59.267 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:59.267 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:05:59.267 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:05:59.267 Initialization complete. Launching workers.
00:05:59.267 ========================================================
00:05:59.267 Latency(us)
00:05:59.268 Device Information : IOPS MiB/s Average min max
00:05:59.268 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002782.84 1000125.76 1043328.65
00:05:59.268 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003825.57 1000131.05 1040707.04
00:05:59.268 ========================================================
00:05:59.268 Total : 256.00 0.12 1003304.21 1000125.76 1043328.65
00:05:59.268
00:05:59.525 18:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:59.525 18:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3481310 00:05:59.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3481310) - No such process 00:05:59.525 18:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- #
wait 3481310 00:05:59.525 18:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:59.525 18:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:05:59.525 18:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:59.525 18:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:05:59.525 18:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:59.525 18:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:05:59.525 18:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:59.525 18:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:59.525 rmmod nvme_tcp 00:05:59.525 rmmod nvme_fabrics 00:05:59.525 rmmod nvme_keyring 00:05:59.785 18:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:59.785 18:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:05:59.785 18:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:05:59.785 18:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3480562 ']' 00:05:59.785 18:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3480562 00:05:59.785 18:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3480562 ']' 00:05:59.785 18:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3480562 00:05:59.785 18:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:05:59.785 18:43:21 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.785 18:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3480562 00:05:59.785 18:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.785 18:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.785 18:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3480562' 00:05:59.785 killing process with pid 3480562 00:05:59.785 18:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3480562 00:05:59.785 18:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3480562 00:05:59.785 18:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:59.785 18:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:59.785 18:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:59.785 18:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:05:59.785 18:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:05:59.785 18:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:59.785 18:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:05:59.785 18:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:59.785 18:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:05:59.785 18:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:59.785 18:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:59.785 18:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:02.322 00:06:02.322 real 0m16.342s 00:06:02.322 user 0m29.411s 00:06:02.322 sys 0m5.597s 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:02.322 ************************************ 00:06:02.322 END TEST nvmf_delete_subsystem 00:06:02.322 ************************************ 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:02.322 ************************************ 00:06:02.322 START TEST nvmf_host_management 00:06:02.322 ************************************ 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:02.322 * Looking for test storage... 
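The delete_subsystem runs above wait for the perf process by probing it with `kill -0` and sleeping 0.5 s between checks, bounded by an iteration counter. A minimal standalone sketch of that polling pattern (the function name `wait_for_exit` is hypothetical, not part of the SPDK scripts):

```shell
#!/bin/bash
# Sketch of the kill -0 polling loop seen in delete_subsystem.sh above:
# wait for a PID to exit, sleeping 0.5 s between probes, with an iteration cap.
wait_for_exit() {                        # hypothetical helper: $1 = pid
    local pid=$1 delay=0
    while kill -0 "$pid" 2> /dev/null; do   # kill -0 only probes, sends no signal
        if (( delay++ > 20 )); then         # give up after ~10 s
            return 1
        fi
        sleep 0.5
    done
    return 0
}

sleep 2 &                                # example: a short-lived background job
wait_for_exit $! && echo "process exited"
```

The real script inverts the failure case (a still-running process after the cap is an error and the test aborts), but the probe-and-sleep structure is the same.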
00:06:02.322 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:02.322 18:43:24 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.322 18:43:24 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:02.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.322 --rc genhtml_branch_coverage=1 00:06:02.322 --rc genhtml_function_coverage=1 00:06:02.322 --rc genhtml_legend=1 00:06:02.322 --rc geninfo_all_blocks=1 00:06:02.322 --rc geninfo_unexecuted_blocks=1 00:06:02.322 00:06:02.322 ' 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:02.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.322 --rc genhtml_branch_coverage=1 00:06:02.322 --rc genhtml_function_coverage=1 00:06:02.322 --rc genhtml_legend=1 00:06:02.322 --rc geninfo_all_blocks=1 00:06:02.322 --rc geninfo_unexecuted_blocks=1 00:06:02.322 00:06:02.322 ' 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:02.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.322 --rc genhtml_branch_coverage=1 00:06:02.322 --rc genhtml_function_coverage=1 00:06:02.322 --rc genhtml_legend=1 00:06:02.322 --rc geninfo_all_blocks=1 00:06:02.322 --rc geninfo_unexecuted_blocks=1 00:06:02.322 00:06:02.322 ' 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:02.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.322 --rc genhtml_branch_coverage=1 00:06:02.322 --rc genhtml_function_coverage=1 00:06:02.322 --rc genhtml_legend=1 00:06:02.322 --rc geninfo_all_blocks=1 00:06:02.322 --rc geninfo_unexecuted_blocks=1 00:06:02.322 00:06:02.322 ' 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
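The xtrace above walks through scripts/common.sh as it evaluates `lt 1.15 2`: both version strings are split on `.`, `-`, and `:` (`IFS=.-:`), and the components are compared numerically left to right. A minimal standalone sketch of that comparison logic (the function name `version_lt` is hypothetical; the SPDK helper is `cmp_versions`):

```shell
#!/bin/bash
# Sketch of component-wise version comparison, after the cmp_versions trace
# above: split on ".-:", treat missing components as 0, compare numerically.
version_lt() {                      # hypothetical helper: returns 0 if $1 < $2
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v a b
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1                        # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"   # matches the lt 1.15 2 result traced above
```

The trace's `ver1_l=2 ver2_l=1` and `decimal 1` / `decimal 2` steps correspond to the array lengths and per-component numeric conversion here.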
00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:02.322 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:02.323 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.323 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.323 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.323 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:02.323 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.323 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:02.323 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:02.323 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:02.323 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:02.323 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:02.323 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:02.323 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:02.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:02.323 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:02.323 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:02.323 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:02.323 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:06:02.323 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:02.323 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:02.323 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:02.323 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:02.323 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:02.323 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:02.323 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:02.323 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:02.323 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:02.323 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:02.323 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:02.323 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:02.323 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:02.323 18:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:08.892 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:08.892 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:08.892 18:43:30 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:08.892 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:08.892 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:08.892 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:08.892 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:08.892 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:08.892 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:08.892 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:08.892 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:08.892 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:08.892 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:08.892 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:08.892 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:08.892 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:08.892 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:08.892 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:08.892 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:08.892 18:43:30 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:08.892 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:08.892 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:08.892 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:08.892 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:08.892 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:08.892 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:08.892 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:08.892 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:08.892 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:08.892 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:08.892 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:08.892 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:08.892 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:08.892 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:08.892 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:08.892 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:08.892 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:08.893 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:08.893 18:43:30 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:08.893 Found net devices under 0000:86:00.0: cvl_0_0 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:08.893 Found net devices under 0000:86:00.1: cvl_0_1 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:08.893 18:43:30 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:08.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:08.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.494 ms 00:06:08.893 00:06:08.893 --- 10.0.0.2 ping statistics --- 00:06:08.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:08.893 rtt min/avg/max/mdev = 0.494/0.494/0.494/0.000 ms 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:08.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:08.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:06:08.893 00:06:08.893 --- 10.0.0.1 ping statistics --- 00:06:08.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:08.893 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3485449 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3485449 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3485449 ']' 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.893 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:08.893 [2024-11-20 18:43:30.546401] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:06:08.893 [2024-11-20 18:43:30.546452] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:08.893 [2024-11-20 18:43:30.626007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:08.893 [2024-11-20 18:43:30.670006] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:08.893 [2024-11-20 18:43:30.670044] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:08.893 [2024-11-20 18:43:30.670051] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:08.893 [2024-11-20 18:43:30.670057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:08.893 [2024-11-20 18:43:30.670063] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:08.894 [2024-11-20 18:43:30.671611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.894 [2024-11-20 18:43:30.671639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:08.894 [2024-11-20 18:43:30.671745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.894 [2024-11-20 18:43:30.671746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:08.894 [2024-11-20 18:43:30.809842] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:08.894 18:43:30 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:08.894 Malloc0 00:06:08.894 [2024-11-20 18:43:30.890324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3485653 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3485653 /var/tmp/bdevperf.sock 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3485653 ']' 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:08.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:08.894 { 00:06:08.894 "params": { 00:06:08.894 "name": "Nvme$subsystem", 00:06:08.894 "trtype": "$TEST_TRANSPORT", 00:06:08.894 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:08.894 "adrfam": "ipv4", 00:06:08.894 "trsvcid": "$NVMF_PORT", 00:06:08.894 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:08.894 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:08.894 "hdgst": ${hdgst:-false}, 
00:06:08.894 "ddgst": ${ddgst:-false} 00:06:08.894 }, 00:06:08.894 "method": "bdev_nvme_attach_controller" 00:06:08.894 } 00:06:08.894 EOF 00:06:08.894 )") 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:08.894 18:43:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:08.894 "params": { 00:06:08.894 "name": "Nvme0", 00:06:08.894 "trtype": "tcp", 00:06:08.894 "traddr": "10.0.0.2", 00:06:08.894 "adrfam": "ipv4", 00:06:08.894 "trsvcid": "4420", 00:06:08.894 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:08.894 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:08.894 "hdgst": false, 00:06:08.894 "ddgst": false 00:06:08.894 }, 00:06:08.894 "method": "bdev_nvme_attach_controller" 00:06:08.894 }' 00:06:08.894 [2024-11-20 18:43:30.987136] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:06:08.894 [2024-11-20 18:43:30.987185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3485653 ] 00:06:08.894 [2024-11-20 18:43:31.062801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.894 [2024-11-20 18:43:31.103778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.152 Running I/O for 10 seconds... 
00:06:09.152 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:09.152 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
00:06:09.152 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:06:09.152 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:09.152 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:09.152 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:09.152 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:06:09.152 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:06:09.152 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:06:09.152 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:06:09.152 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:06:09.152 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:06:09.152 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:06:09.152 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:06:09.152 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:06:09.153 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:06:09.153 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:09.153 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:09.153 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:09.411 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=91
00:06:09.411 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 91 -ge 100 ']'
00:06:09.411 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25
00:06:09.671 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- ))
00:06:09.671 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:06:09.671 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:06:09.671 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:09.671 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:09.671 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:06:09.671 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:09.671 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707
00:06:09.671 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management --
target/host_management.sh@58 -- # '[' 707 -ge 100 ']'
00:06:09.671 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:06:09.671 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break
00:06:09.671 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:06:09.671 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:06:09.671 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:09.671 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:09.671 [2024-11-20 18:43:31.793450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:09.671 [2024-11-20 18:43:31.793490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:09.672 [... 63 further command/completion pairs in the same format omitted: WRITE cid:54-63 (lba 105216-106368) and READ cid:0-52 (lba 98304-104960), each completed ABORTED - SQ DELETION (00/08) ...]
00:06:09.673 [2024-11-20 18:43:31.794441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e79560 is same with the state(6) to be set
00:06:09.673 [2024-11-20 18:43:31.795397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:06:09.673 task offset: 105088 on job bdev=Nvme0n1 fails
00:06:09.673
00:06:09.673 Latency(us)
[2024-11-20T17:43:31.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:09.673 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:09.673 Job: Nvme0n1 ended in about 0.41 seconds with error
00:06:09.673 Verification LBA range: start 0x0 length 0x400
00:06:09.673 Nvme0n1 : 0.41 1888.54 118.03 157.38 0.00 30457.88 1474.56 26963.38
[2024-11-20T17:43:31.998Z] ===================================================================================================================
[2024-11-20T17:43:31.998Z] Total : 1888.54 118.03 157.38 0.00 30457.88 1474.56 26963.38
00:06:09.673 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:09.673 [2024-11-20 18:43:31.797840] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:09.673 [2024-11-20 18:43:31.797869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60500 (9): Bad file descriptor
00:06:09.673 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:06:09.673 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:09.673 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:09.673 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:09.673 18:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:06:09.673 [2024-11-20 18:43:31.931382] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
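Before the second bdevperf run, the script kills the first instance by pid; since that process has already exited (the app stopped itself after the failed job), the kill is allowed to fail and the script continues via `true`. A minimal sketch of that guard, using an obviously-dead placeholder pid (assumption: no process with this pid exists):

```shell
#!/usr/bin/env bash
# kill -9 on a pid that no longer exists returns non-zero and reports
# "No such process"; appending || true keeps set -e scripts alive, which
# is exactly the `kill -9 $perfpid ... true` sequence seen in the log.
perfpid=99999999                       # placeholder pid, assumed dead
kill -9 "$perfpid" 2>/dev/null || true
echo "continued, status=$?"
```

The trailing `echo` runs regardless of whether the process existed, mirroring how the test proceeds to cleanup after the failed `kill`.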
00:06:10.606 18:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3485653
00:06:10.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3485653) - No such process
00:06:10.606 18:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
00:06:10.606 18:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:06:10.606 18:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:06:10.606 18:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:06:10.606 18:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:06:10.606 18:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:06:10.606 18:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:06:10.606 18:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:06:10.606 {
00:06:10.606 "params": {
00:06:10.606 "name": "Nvme$subsystem",
00:06:10.606 "trtype": "$TEST_TRANSPORT",
00:06:10.606 "traddr": "$NVMF_FIRST_TARGET_IP",
00:06:10.606 "adrfam": "ipv4",
00:06:10.606 "trsvcid": "$NVMF_PORT",
00:06:10.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:06:10.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:06:10.606 "hdgst": ${hdgst:-false},
00:06:10.606 "ddgst": ${ddgst:-false}
00:06:10.606 },
00:06:10.606 "method": "bdev_nvme_attach_controller"
00:06:10.606 }
00:06:10.606 EOF
00:06:10.606 )")
00:06:10.606 18:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:06:10.606 18:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:06:10.606 18:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:06:10.606 18:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:06:10.606 "params": {
00:06:10.606 "name": "Nvme0",
00:06:10.606 "trtype": "tcp",
00:06:10.606 "traddr": "10.0.0.2",
00:06:10.606 "adrfam": "ipv4",
00:06:10.606 "trsvcid": "4420",
00:06:10.606 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:06:10.606 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:06:10.606 "hdgst": false,
00:06:10.606 "ddgst": false
00:06:10.606 },
00:06:10.606 "method": "bdev_nvme_attach_controller"
00:06:10.606 }'
00:06:10.607 [2024-11-20 18:43:32.859019] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization...
00:06:10.607 [2024-11-20 18:43:32.859069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3485961 ]
00:06:10.865 [2024-11-20 18:43:32.932615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:10.865 [2024-11-20 18:43:32.971190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:11.123 Running I/O for 1 seconds...
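The `gen_nvmf_target_json` trace above builds one JSON fragment per subsystem from a heredoc, with `hdgst`/`ddgst` defaulting to `false` via `${var:-false}` expansion, before the fragments are merged through `jq`. A hedged sketch of that shape (fallback values hard-coded where the real script reads `$TEST_TRANSPORT`, `$NVMF_FIRST_TARGET_IP`, and `$NVMF_PORT` from the environment; the `jq` merge step is omitted):

```shell
#!/usr/bin/env bash
# Emit a bdev_nvme_attach_controller config fragment for one subsystem,
# mirroring the heredoc traced in the log above.
gen_target_json() {
  local subsystem=${1:-0}
  cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_target_json 0
```

With `subsystem=0` and the defaults in place, this reproduces the resolved JSON the log shows being fed to bdevperf over `/dev/fd/62`.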
00:06:12.055 1999.00 IOPS, 124.94 MiB/s
00:06:12.055 Latency(us)
[2024-11-20T17:43:34.380Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:12.055 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:12.055 Verification LBA range: start 0x0 length 0x400
00:06:12.055 Nvme0n1 : 1.01 2043.64 127.73 0.00 0.00 30615.78 2964.72 26963.38
[2024-11-20T17:43:34.380Z] ===================================================================================================================
[2024-11-20T17:43:34.381Z] Total : 2043.64 127.73 0.00 0.00 30615.78 2964.72 26963.38
00:06:12.313 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:06:12.314 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:06:12.314 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:06:12.314 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:06:12.314 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:06:12.314 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:12.314 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:06:12.314 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:12.314 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:06:12.314 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:12.314 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:06:12.314 rmmod nvme_tcp
00:06:12.314 rmmod nvme_fabrics
00:06:12.314 rmmod nvme_keyring
00:06:12.314 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:12.314 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:06:12.314 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:06:12.314 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3485449 ']'
00:06:12.314 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3485449
00:06:12.314 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3485449 ']'
00:06:12.314 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3485449
00:06:12.314 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
00:06:12.314 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:12.314 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3485449
00:06:12.314 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:06:12.314 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:06:12.314 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3485449'
00:06:12.314 killing process with pid 3485449
00:06:12.314 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3485449
00:06:12.314 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3485449
00:06:12.572 [2024-11-20 18:43:34.717913] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:06:12.572 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:06:12.573 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:06:12.573 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:06:12.573 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr
00:06:12.573 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save
00:06:12.573 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:06:12.573 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore
00:06:12.573 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:06:12.573 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns
00:06:12.573 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:12.573 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:06:12.573 18:43:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:15.109 18:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:06:15.109 18:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:06:15.109
00:06:15.109 real 0m12.590s
00:06:15.109 user 0m20.410s
00:06:15.109 sys 0m5.622s 00:06:15.109 18:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.109 18:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:15.109 ************************************ 00:06:15.109 END TEST nvmf_host_management 00:06:15.109 ************************************ 00:06:15.109 18:43:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:15.109 18:43:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:15.109 18:43:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.109 18:43:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:15.109 ************************************ 00:06:15.109 START TEST nvmf_lvol 00:06:15.109 ************************************ 00:06:15.109 18:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:15.109 * Looking for test storage... 
00:06:15.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:15.109 18:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:15.109 18:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:06:15.109 18:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:15.109 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:15.109 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.109 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.109 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.109 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.109 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.109 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.109 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.109 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.109 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.109 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.109 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.109 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:15.109 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:15.109 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.109 18:43:37 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:15.109 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:15.109 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:15.109 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.109 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:15.109 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.109 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:15.109 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:15.109 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.109 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:15.109 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.109 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.109 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.109 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:15.109 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.109 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:15.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.109 --rc genhtml_branch_coverage=1 00:06:15.109 --rc genhtml_function_coverage=1 00:06:15.109 --rc genhtml_legend=1 00:06:15.109 --rc geninfo_all_blocks=1 00:06:15.109 --rc geninfo_unexecuted_blocks=1 
00:06:15.109 00:06:15.109 ' 00:06:15.109 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:15.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.109 --rc genhtml_branch_coverage=1 00:06:15.109 --rc genhtml_function_coverage=1 00:06:15.109 --rc genhtml_legend=1 00:06:15.109 --rc geninfo_all_blocks=1 00:06:15.109 --rc geninfo_unexecuted_blocks=1 00:06:15.109 00:06:15.109 ' 00:06:15.109 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:15.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.109 --rc genhtml_branch_coverage=1 00:06:15.109 --rc genhtml_function_coverage=1 00:06:15.109 --rc genhtml_legend=1 00:06:15.110 --rc geninfo_all_blocks=1 00:06:15.110 --rc geninfo_unexecuted_blocks=1 00:06:15.110 00:06:15.110 ' 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:15.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.110 --rc genhtml_branch_coverage=1 00:06:15.110 --rc genhtml_function_coverage=1 00:06:15.110 --rc genhtml_legend=1 00:06:15.110 --rc geninfo_all_blocks=1 00:06:15.110 --rc geninfo_unexecuted_blocks=1 00:06:15.110 00:06:15.110 ' 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:15.110 18:43:37 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:15.110 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:15.110 18:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:21.680 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:21.680 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:21.680 
18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:21.680 Found net devices under 0000:86:00.0: cvl_0_0 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:21.680 18:43:42 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:21.680 Found net devices under 0000:86:00.1: cvl_0_1 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:21.680 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:21.680 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:21.680 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:21.680 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:21.680 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:21.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:21.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.449 ms 00:06:21.680 00:06:21.680 --- 10.0.0.2 ping statistics --- 00:06:21.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:21.680 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:06:21.680 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:21.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:21.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:06:21.680 00:06:21.680 --- 10.0.0.1 ping statistics --- 00:06:21.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:21.680 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:06:21.680 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:21.680 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:21.680 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:21.680 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:21.680 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:21.680 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:21.680 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:21.680 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:21.680 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:21.680 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:21.680 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:21.680 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:06:21.680 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:21.680 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3489753 00:06:21.680 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:21.680 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3489753 00:06:21.680 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3489753 ']' 00:06:21.680 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.680 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.680 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.680 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.680 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:21.680 [2024-11-20 18:43:43.180952] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:06:21.680 [2024-11-20 18:43:43.180997] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:21.680 [2024-11-20 18:43:43.257968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:21.680 [2024-11-20 18:43:43.299579] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:21.680 [2024-11-20 18:43:43.299616] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:21.680 [2024-11-20 18:43:43.299623] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:21.680 [2024-11-20 18:43:43.299628] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:21.680 [2024-11-20 18:43:43.299633] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:21.680 [2024-11-20 18:43:43.300952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.680 [2024-11-20 18:43:43.301060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.680 [2024-11-20 18:43:43.301062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.680 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.680 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:21.680 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:21.680 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:21.680 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:21.680 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:21.681 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:21.681 [2024-11-20 18:43:43.615104] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:21.681 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:21.681 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:21.681 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:21.939 18:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:21.939 18:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:22.197 18:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:22.197 18:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=5f6e6eec-71ab-4c53-b1ca-3470ddefd323 00:06:22.197 18:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5f6e6eec-71ab-4c53-b1ca-3470ddefd323 lvol 20 00:06:22.456 18:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=5c55f7ce-cc9c-442b-945d-ec75deebc633 00:06:22.456 18:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:22.713 18:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5c55f7ce-cc9c-442b-945d-ec75deebc633 00:06:22.971 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:22.971 [2024-11-20 18:43:45.256529] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:22.971 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:23.229 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:23.229 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3490241 00:06:23.229 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:24.213 18:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 5c55f7ce-cc9c-442b-945d-ec75deebc633 MY_SNAPSHOT 00:06:24.485 18:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1ea13dd3-3517-4d7e-87cd-fe2863d7d6b6 00:06:24.485 18:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 5c55f7ce-cc9c-442b-945d-ec75deebc633 30 00:06:24.770 18:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1ea13dd3-3517-4d7e-87cd-fe2863d7d6b6 MY_CLONE 00:06:25.042 18:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=3582bdf4-b6de-46f6-92d8-d2d838b4f15e 00:06:25.042 18:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 3582bdf4-b6de-46f6-92d8-d2d838b4f15e 00:06:25.607 18:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3490241 00:06:33.700 Initializing NVMe Controllers 00:06:33.700 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:33.700 Controller IO queue size 128, less than required. 00:06:33.700 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:33.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:33.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:33.700 Initialization complete. Launching workers. 00:06:33.700 ======================================================== 00:06:33.700 Latency(us) 00:06:33.700 Device Information : IOPS MiB/s Average min max 00:06:33.700 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12283.67 47.98 10421.34 445.67 72407.67 00:06:33.700 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12215.07 47.72 10476.45 3256.98 56823.75 00:06:33.700 ======================================================== 00:06:33.700 Total : 24498.73 95.70 10448.82 445.67 72407.67 00:06:33.700 00:06:33.701 18:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:33.959 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5c55f7ce-cc9c-442b-945d-ec75deebc633 00:06:33.959 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5f6e6eec-71ab-4c53-b1ca-3470ddefd323 00:06:34.217 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:34.217 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:34.217 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:34.217 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:34.217 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:34.217 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:34.217 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:34.217 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:34.217 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:34.217 rmmod nvme_tcp 00:06:34.217 rmmod nvme_fabrics 00:06:34.217 rmmod nvme_keyring 00:06:34.217 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:34.217 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:34.217 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:34.217 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3489753 ']' 00:06:34.217 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3489753 00:06:34.217 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3489753 ']' 00:06:34.217 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3489753 00:06:34.217 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:34.217 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.476 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3489753 00:06:34.476 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.476 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.476 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3489753' 00:06:34.476 killing process with pid 3489753 00:06:34.476 18:43:56 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3489753 00:06:34.476 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3489753 00:06:34.476 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:34.476 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:34.476 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:34.476 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:34.476 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:34.476 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:34.476 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:34.476 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:34.476 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:34.476 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:34.476 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:34.476 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:37.014 18:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:37.014 00:06:37.014 real 0m21.965s 00:06:37.014 user 1m3.021s 00:06:37.014 sys 0m7.608s 00:06:37.014 18:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.014 18:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:37.014 ************************************ 00:06:37.014 END TEST 
nvmf_lvol 00:06:37.014 ************************************ 00:06:37.014 18:43:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:37.014 18:43:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:37.014 18:43:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.014 18:43:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:37.014 ************************************ 00:06:37.014 START TEST nvmf_lvs_grow 00:06:37.014 ************************************ 00:06:37.014 18:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:37.014 * Looking for test storage... 00:06:37.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:37.014 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:37.014 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:06:37.014 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:37.014 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:37.014 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.014 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.014 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.014 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.014 18:43:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.014 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.014 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.014 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.014 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.014 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.014 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.014 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:37.014 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:37.014 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.014 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:37.014 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:37.014 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:37.014 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.014 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:37.014 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.014 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:37.014 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:37.014 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.014 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:37.014 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.014 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.014 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.014 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:37.014 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.014 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:37.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.014 --rc genhtml_branch_coverage=1 00:06:37.014 --rc genhtml_function_coverage=1 00:06:37.014 --rc genhtml_legend=1 00:06:37.014 --rc geninfo_all_blocks=1 00:06:37.014 --rc geninfo_unexecuted_blocks=1 00:06:37.014 00:06:37.014 ' 
00:06:37.014 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:37.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.014 --rc genhtml_branch_coverage=1 00:06:37.014 --rc genhtml_function_coverage=1 00:06:37.014 --rc genhtml_legend=1 00:06:37.014 --rc geninfo_all_blocks=1 00:06:37.014 --rc geninfo_unexecuted_blocks=1 00:06:37.014 00:06:37.014 ' 00:06:37.014 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:37.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.014 --rc genhtml_branch_coverage=1 00:06:37.014 --rc genhtml_function_coverage=1 00:06:37.014 --rc genhtml_legend=1 00:06:37.014 --rc geninfo_all_blocks=1 00:06:37.014 --rc geninfo_unexecuted_blocks=1 00:06:37.014 00:06:37.014 ' 00:06:37.014 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:37.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.014 --rc genhtml_branch_coverage=1 00:06:37.014 --rc genhtml_function_coverage=1 00:06:37.014 --rc genhtml_legend=1 00:06:37.014 --rc geninfo_all_blocks=1 00:06:37.014 --rc geninfo_unexecuted_blocks=1 00:06:37.014 00:06:37.014 ' 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:37.015 18:43:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:37.015 
18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:37.015 18:43:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:37.015 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:37.015 
18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:37.015 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:43.585 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:43.586 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:43.586 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:43.586 
18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:43.586 Found net devices under 0000:86:00.0: cvl_0_0 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:43.586 Found net devices under 0000:86:00.1: cvl_0_1 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:43.586 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:43.586 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:43.586 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:43.586 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:43.586 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:43.586 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:43.586 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:43.586 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:43.586 18:44:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:43.586 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:43.586 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:06:43.586 00:06:43.586 --- 10.0.0.2 ping statistics --- 00:06:43.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:43.586 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:06:43.586 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:43.586 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:43.586 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:06:43.586 00:06:43.586 --- 10.0.0.1 ping statistics --- 00:06:43.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:43.586 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3495640 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3495640 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3495640 ']' 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:43.587 [2024-11-20 18:44:05.260156] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:06:43.587 [2024-11-20 18:44:05.260212] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:43.587 [2024-11-20 18:44:05.338871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.587 [2024-11-20 18:44:05.378554] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:43.587 [2024-11-20 18:44:05.378589] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:43.587 [2024-11-20 18:44:05.378599] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:43.587 [2024-11-20 18:44:05.378605] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:43.587 [2024-11-20 18:44:05.378610] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:43.587 [2024-11-20 18:44:05.379148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:43.587 [2024-11-20 18:44:05.700453] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:43.587 ************************************ 00:06:43.587 START TEST lvs_grow_clean 00:06:43.587 ************************************ 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:43.587 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:43.846 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:43.846 18:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:43.847 18:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=11fbe6c5-96a3-4a03-a400-e03166bff4f1 00:06:43.847 18:44:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11fbe6c5-96a3-4a03-a400-e03166bff4f1 00:06:43.847 18:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:44.105 18:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:44.105 18:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:44.105 18:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 11fbe6c5-96a3-4a03-a400-e03166bff4f1 lvol 150 00:06:44.364 18:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=477f02da-4e49-4f42-b87b-97c885e334c9 00:06:44.364 18:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:44.364 18:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:44.364 [2024-11-20 18:44:06.674044] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:44.364 [2024-11-20 18:44:06.674093] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:44.364 true 00:06:44.622 18:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11fbe6c5-96a3-4a03-a400-e03166bff4f1 00:06:44.622 18:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:44.623 18:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:44.623 18:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:44.882 18:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 477f02da-4e49-4f42-b87b-97c885e334c9 00:06:45.140 18:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:45.140 [2024-11-20 18:44:07.420291] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:45.140 18:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:45.398 18:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:45.398 18:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3496139 00:06:45.398 18:44:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:45.398 18:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3496139 /var/tmp/bdevperf.sock 00:06:45.398 18:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3496139 ']' 00:06:45.398 18:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:45.398 18:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.398 18:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:45.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:45.398 18:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.398 18:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:45.398 [2024-11-20 18:44:07.644950] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:06:45.398 [2024-11-20 18:44:07.644995] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3496139 ] 00:06:45.398 [2024-11-20 18:44:07.718869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.655 [2024-11-20 18:44:07.761656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.655 18:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.655 18:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:06:45.655 18:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:46.222 Nvme0n1 00:06:46.222 18:44:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:46.222 [ 00:06:46.222 { 00:06:46.222 "name": "Nvme0n1", 00:06:46.222 "aliases": [ 00:06:46.222 "477f02da-4e49-4f42-b87b-97c885e334c9" 00:06:46.222 ], 00:06:46.222 "product_name": "NVMe disk", 00:06:46.222 "block_size": 4096, 00:06:46.222 "num_blocks": 38912, 00:06:46.222 "uuid": "477f02da-4e49-4f42-b87b-97c885e334c9", 00:06:46.222 "numa_id": 1, 00:06:46.222 "assigned_rate_limits": { 00:06:46.222 "rw_ios_per_sec": 0, 00:06:46.222 "rw_mbytes_per_sec": 0, 00:06:46.222 "r_mbytes_per_sec": 0, 00:06:46.222 "w_mbytes_per_sec": 0 00:06:46.222 }, 00:06:46.222 "claimed": false, 00:06:46.222 "zoned": false, 00:06:46.222 "supported_io_types": { 00:06:46.222 "read": true, 
00:06:46.222 "write": true, 00:06:46.222 "unmap": true, 00:06:46.222 "flush": true, 00:06:46.222 "reset": true, 00:06:46.222 "nvme_admin": true, 00:06:46.222 "nvme_io": true, 00:06:46.222 "nvme_io_md": false, 00:06:46.222 "write_zeroes": true, 00:06:46.222 "zcopy": false, 00:06:46.222 "get_zone_info": false, 00:06:46.222 "zone_management": false, 00:06:46.222 "zone_append": false, 00:06:46.222 "compare": true, 00:06:46.222 "compare_and_write": true, 00:06:46.222 "abort": true, 00:06:46.222 "seek_hole": false, 00:06:46.222 "seek_data": false, 00:06:46.222 "copy": true, 00:06:46.222 "nvme_iov_md": false 00:06:46.222 }, 00:06:46.222 "memory_domains": [ 00:06:46.222 { 00:06:46.222 "dma_device_id": "system", 00:06:46.223 "dma_device_type": 1 00:06:46.223 } 00:06:46.223 ], 00:06:46.223 "driver_specific": { 00:06:46.223 "nvme": [ 00:06:46.223 { 00:06:46.223 "trid": { 00:06:46.223 "trtype": "TCP", 00:06:46.223 "adrfam": "IPv4", 00:06:46.223 "traddr": "10.0.0.2", 00:06:46.223 "trsvcid": "4420", 00:06:46.223 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:46.223 }, 00:06:46.223 "ctrlr_data": { 00:06:46.223 "cntlid": 1, 00:06:46.223 "vendor_id": "0x8086", 00:06:46.223 "model_number": "SPDK bdev Controller", 00:06:46.223 "serial_number": "SPDK0", 00:06:46.223 "firmware_revision": "25.01", 00:06:46.223 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:46.223 "oacs": { 00:06:46.223 "security": 0, 00:06:46.223 "format": 0, 00:06:46.223 "firmware": 0, 00:06:46.223 "ns_manage": 0 00:06:46.223 }, 00:06:46.223 "multi_ctrlr": true, 00:06:46.223 "ana_reporting": false 00:06:46.223 }, 00:06:46.223 "vs": { 00:06:46.223 "nvme_version": "1.3" 00:06:46.223 }, 00:06:46.223 "ns_data": { 00:06:46.223 "id": 1, 00:06:46.223 "can_share": true 00:06:46.223 } 00:06:46.223 } 00:06:46.223 ], 00:06:46.223 "mp_policy": "active_passive" 00:06:46.223 } 00:06:46.223 } 00:06:46.223 ] 00:06:46.223 18:44:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:46.223 18:44:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3496322 00:06:46.223 18:44:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:46.223 Running I/O for 10 seconds... 00:06:47.596 Latency(us) 00:06:47.596 [2024-11-20T17:44:09.921Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:47.596 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:47.596 Nvme0n1 : 1.00 23389.00 91.36 0.00 0.00 0.00 0.00 0.00 00:06:47.596 [2024-11-20T17:44:09.921Z] =================================================================================================================== 00:06:47.596 [2024-11-20T17:44:09.921Z] Total : 23389.00 91.36 0.00 0.00 0.00 0.00 0.00 00:06:47.596 00:06:48.162 18:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 11fbe6c5-96a3-4a03-a400-e03166bff4f1 00:06:48.421 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:48.421 Nvme0n1 : 2.00 23382.00 91.34 0.00 0.00 0.00 0.00 0.00 00:06:48.421 [2024-11-20T17:44:10.746Z] =================================================================================================================== 00:06:48.421 [2024-11-20T17:44:10.746Z] Total : 23382.00 91.34 0.00 0.00 0.00 0.00 0.00 00:06:48.421 00:06:48.421 true 00:06:48.421 18:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11fbe6c5-96a3-4a03-a400-e03166bff4f1 00:06:48.421 18:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:06:48.677 18:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:48.677 18:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:48.677 18:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3496322 00:06:49.242 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:49.242 Nvme0n1 : 3.00 23400.00 91.41 0.00 0.00 0.00 0.00 0.00 00:06:49.242 [2024-11-20T17:44:11.567Z] =================================================================================================================== 00:06:49.242 [2024-11-20T17:44:11.567Z] Total : 23400.00 91.41 0.00 0.00 0.00 0.00 0.00 00:06:49.242 00:06:50.616 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:50.616 Nvme0n1 : 4.00 23513.25 91.85 0.00 0.00 0.00 0.00 0.00 00:06:50.616 [2024-11-20T17:44:12.941Z] =================================================================================================================== 00:06:50.616 [2024-11-20T17:44:12.941Z] Total : 23513.25 91.85 0.00 0.00 0.00 0.00 0.00 00:06:50.616 00:06:51.550 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:51.550 Nvme0n1 : 5.00 23571.80 92.08 0.00 0.00 0.00 0.00 0.00 00:06:51.550 [2024-11-20T17:44:13.875Z] =================================================================================================================== 00:06:51.550 [2024-11-20T17:44:13.875Z] Total : 23571.80 92.08 0.00 0.00 0.00 0.00 0.00 00:06:51.550 00:06:52.485 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:52.485 Nvme0n1 : 6.00 23615.67 92.25 0.00 0.00 0.00 0.00 0.00 00:06:52.485 [2024-11-20T17:44:14.810Z] =================================================================================================================== 00:06:52.485 
[2024-11-20T17:44:14.810Z] Total : 23615.67 92.25 0.00 0.00 0.00 0.00 0.00 00:06:52.485 00:06:53.419 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:53.419 Nvme0n1 : 7.00 23664.14 92.44 0.00 0.00 0.00 0.00 0.00 00:06:53.419 [2024-11-20T17:44:15.744Z] =================================================================================================================== 00:06:53.419 [2024-11-20T17:44:15.744Z] Total : 23664.14 92.44 0.00 0.00 0.00 0.00 0.00 00:06:53.419 00:06:54.355 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:54.355 Nvme0n1 : 8.00 23691.25 92.54 0.00 0.00 0.00 0.00 0.00 00:06:54.355 [2024-11-20T17:44:16.680Z] =================================================================================================================== 00:06:54.355 [2024-11-20T17:44:16.680Z] Total : 23691.25 92.54 0.00 0.00 0.00 0.00 0.00 00:06:54.355 00:06:55.289 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:55.289 Nvme0n1 : 9.00 23714.00 92.63 0.00 0.00 0.00 0.00 0.00 00:06:55.289 [2024-11-20T17:44:17.614Z] =================================================================================================================== 00:06:55.289 [2024-11-20T17:44:17.614Z] Total : 23714.00 92.63 0.00 0.00 0.00 0.00 0.00 00:06:55.289 00:06:56.664 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:56.664 Nvme0n1 : 10.00 23736.80 92.72 0.00 0.00 0.00 0.00 0.00 00:06:56.664 [2024-11-20T17:44:18.989Z] =================================================================================================================== 00:06:56.664 [2024-11-20T17:44:18.989Z] Total : 23736.80 92.72 0.00 0.00 0.00 0.00 0.00 00:06:56.664 00:06:56.664 00:06:56.664 Latency(us) 00:06:56.664 [2024-11-20T17:44:18.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:56.664 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:06:56.664 Nvme0n1 : 10.00 23735.90 92.72 0.00 0.00 5389.24 3183.18 13918.60 00:06:56.664 [2024-11-20T17:44:18.989Z] =================================================================================================================== 00:06:56.664 [2024-11-20T17:44:18.989Z] Total : 23735.90 92.72 0.00 0.00 5389.24 3183.18 13918.60 00:06:56.664 { 00:06:56.664 "results": [ 00:06:56.664 { 00:06:56.664 "job": "Nvme0n1", 00:06:56.664 "core_mask": "0x2", 00:06:56.664 "workload": "randwrite", 00:06:56.664 "status": "finished", 00:06:56.664 "queue_depth": 128, 00:06:56.664 "io_size": 4096, 00:06:56.664 "runtime": 10.003116, 00:06:56.664 "iops": 23735.903892347145, 00:06:56.664 "mibps": 92.71837457948104, 00:06:56.664 "io_failed": 0, 00:06:56.664 "io_timeout": 0, 00:06:56.664 "avg_latency_us": 5389.24192343785, 00:06:56.664 "min_latency_us": 3183.177142857143, 00:06:56.664 "max_latency_us": 13918.598095238096 00:06:56.664 } 00:06:56.664 ], 00:06:56.664 "core_count": 1 00:06:56.664 } 00:06:56.664 18:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3496139 00:06:56.664 18:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3496139 ']' 00:06:56.664 18:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3496139 00:06:56.665 18:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:06:56.665 18:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:56.665 18:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3496139 00:06:56.665 18:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:56.665 18:44:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:56.665 18:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3496139' 00:06:56.665 killing process with pid 3496139 00:06:56.665 18:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3496139 00:06:56.665 Received shutdown signal, test time was about 10.000000 seconds 00:06:56.665 00:06:56.665 Latency(us) 00:06:56.665 [2024-11-20T17:44:18.990Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:56.665 [2024-11-20T17:44:18.990Z] =================================================================================================================== 00:06:56.665 [2024-11-20T17:44:18.990Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:56.665 18:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3496139 00:06:56.665 18:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:56.665 18:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:56.924 18:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11fbe6c5-96a3-4a03-a400-e03166bff4f1 00:06:56.924 18:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:06:57.183 18:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:06:57.183 18:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:06:57.183 18:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:57.442 [2024-11-20 18:44:19.563021] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:06:57.442 18:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11fbe6c5-96a3-4a03-a400-e03166bff4f1 00:06:57.442 18:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:06:57.442 18:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11fbe6c5-96a3-4a03-a400-e03166bff4f1 00:06:57.442 18:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:57.442 18:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.442 18:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:57.442 18:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.442 18:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:57.442 
18:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.442 18:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:57.442 18:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:57.442 18:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11fbe6c5-96a3-4a03-a400-e03166bff4f1 00:06:57.701 request: 00:06:57.701 { 00:06:57.701 "uuid": "11fbe6c5-96a3-4a03-a400-e03166bff4f1", 00:06:57.701 "method": "bdev_lvol_get_lvstores", 00:06:57.701 "req_id": 1 00:06:57.701 } 00:06:57.701 Got JSON-RPC error response 00:06:57.701 response: 00:06:57.701 { 00:06:57.701 "code": -19, 00:06:57.701 "message": "No such device" 00:06:57.701 } 00:06:57.701 18:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:06:57.701 18:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:57.701 18:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:57.701 18:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:57.701 18:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:57.701 aio_bdev 00:06:57.701 18:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 477f02da-4e49-4f42-b87b-97c885e334c9 00:06:57.701 18:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=477f02da-4e49-4f42-b87b-97c885e334c9 00:06:57.701 18:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:57.701 18:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:06:57.701 18:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:57.701 18:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:57.701 18:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:06:57.960 18:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 477f02da-4e49-4f42-b87b-97c885e334c9 -t 2000 00:06:58.218 [ 00:06:58.218 { 00:06:58.218 "name": "477f02da-4e49-4f42-b87b-97c885e334c9", 00:06:58.219 "aliases": [ 00:06:58.219 "lvs/lvol" 00:06:58.219 ], 00:06:58.219 "product_name": "Logical Volume", 00:06:58.219 "block_size": 4096, 00:06:58.219 "num_blocks": 38912, 00:06:58.219 "uuid": "477f02da-4e49-4f42-b87b-97c885e334c9", 00:06:58.219 "assigned_rate_limits": { 00:06:58.219 "rw_ios_per_sec": 0, 00:06:58.219 "rw_mbytes_per_sec": 0, 00:06:58.219 "r_mbytes_per_sec": 0, 00:06:58.219 "w_mbytes_per_sec": 0 00:06:58.219 }, 00:06:58.219 "claimed": false, 00:06:58.219 "zoned": false, 00:06:58.219 "supported_io_types": { 00:06:58.219 "read": true, 00:06:58.219 "write": true, 00:06:58.219 "unmap": true, 00:06:58.219 "flush": false, 00:06:58.219 "reset": true, 00:06:58.219 
"nvme_admin": false, 00:06:58.219 "nvme_io": false, 00:06:58.219 "nvme_io_md": false, 00:06:58.219 "write_zeroes": true, 00:06:58.219 "zcopy": false, 00:06:58.219 "get_zone_info": false, 00:06:58.219 "zone_management": false, 00:06:58.219 "zone_append": false, 00:06:58.219 "compare": false, 00:06:58.219 "compare_and_write": false, 00:06:58.219 "abort": false, 00:06:58.219 "seek_hole": true, 00:06:58.219 "seek_data": true, 00:06:58.219 "copy": false, 00:06:58.219 "nvme_iov_md": false 00:06:58.219 }, 00:06:58.219 "driver_specific": { 00:06:58.219 "lvol": { 00:06:58.219 "lvol_store_uuid": "11fbe6c5-96a3-4a03-a400-e03166bff4f1", 00:06:58.219 "base_bdev": "aio_bdev", 00:06:58.219 "thin_provision": false, 00:06:58.219 "num_allocated_clusters": 38, 00:06:58.219 "snapshot": false, 00:06:58.219 "clone": false, 00:06:58.219 "esnap_clone": false 00:06:58.219 } 00:06:58.219 } 00:06:58.219 } 00:06:58.219 ] 00:06:58.219 18:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:06:58.219 18:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11fbe6c5-96a3-4a03-a400-e03166bff4f1 00:06:58.219 18:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:06:58.219 18:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:06:58.219 18:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11fbe6c5-96a3-4a03-a400-e03166bff4f1 00:06:58.219 18:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:06:58.478 18:44:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:06:58.478 18:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 477f02da-4e49-4f42-b87b-97c885e334c9 00:06:58.738 18:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 11fbe6c5-96a3-4a03-a400-e03166bff4f1 00:06:58.997 18:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:58.997 18:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:59.256 00:06:59.256 real 0m15.580s 00:06:59.256 user 0m15.158s 00:06:59.256 sys 0m1.476s 00:06:59.256 18:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.256 18:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:59.256 ************************************ 00:06:59.256 END TEST lvs_grow_clean 00:06:59.256 ************************************ 00:06:59.256 18:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:06:59.256 18:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:59.256 18:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.256 18:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:59.256 ************************************ 
00:06:59.256 START TEST lvs_grow_dirty 00:06:59.256 ************************************ 00:06:59.256 18:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:06:59.256 18:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:59.256 18:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:59.256 18:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:59.256 18:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:59.256 18:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:59.256 18:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:59.256 18:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:59.256 18:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:59.256 18:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:59.515 18:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:59.515 18:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:59.515 18:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a5a25b2d-e096-4452-a779-7623f94a610a 00:06:59.515 18:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5a25b2d-e096-4452-a779-7623f94a610a 00:06:59.515 18:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:59.774 18:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:59.774 18:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:59.774 18:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a5a25b2d-e096-4452-a779-7623f94a610a lvol 150 00:07:00.033 18:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ee6ebed2-4fba-42a7-8a60-c2aa2184667c 00:07:00.033 18:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:00.033 18:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:00.292 [2024-11-20 18:44:22.359065] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:07:00.292 [2024-11-20 18:44:22.359115] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:00.292 true 00:07:00.292 18:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5a25b2d-e096-4452-a779-7623f94a610a 00:07:00.292 18:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:00.292 18:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:00.292 18:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:00.550 18:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ee6ebed2-4fba-42a7-8a60-c2aa2184667c 00:07:00.809 18:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:00.809 [2024-11-20 18:44:23.077190] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:00.809 18:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:01.116 18:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3498740 00:07:01.116 18:44:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:01.116 18:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:01.116 18:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3498740 /var/tmp/bdevperf.sock 00:07:01.116 18:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3498740 ']' 00:07:01.116 18:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:01.116 18:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.116 18:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:01.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:01.116 18:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.116 18:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:01.116 [2024-11-20 18:44:23.310575] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:07:01.116 [2024-11-20 18:44:23.310619] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3498740 ] 00:07:01.116 [2024-11-20 18:44:23.383940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.116 [2024-11-20 18:44:23.424026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.379 18:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.379 18:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:01.379 18:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:01.637 Nvme0n1 00:07:01.638 18:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:01.897 [ 00:07:01.897 { 00:07:01.897 "name": "Nvme0n1", 00:07:01.897 "aliases": [ 00:07:01.897 "ee6ebed2-4fba-42a7-8a60-c2aa2184667c" 00:07:01.897 ], 00:07:01.897 "product_name": "NVMe disk", 00:07:01.897 "block_size": 4096, 00:07:01.897 "num_blocks": 38912, 00:07:01.897 "uuid": "ee6ebed2-4fba-42a7-8a60-c2aa2184667c", 00:07:01.897 "numa_id": 1, 00:07:01.897 "assigned_rate_limits": { 00:07:01.897 "rw_ios_per_sec": 0, 00:07:01.897 "rw_mbytes_per_sec": 0, 00:07:01.897 "r_mbytes_per_sec": 0, 00:07:01.897 "w_mbytes_per_sec": 0 00:07:01.897 }, 00:07:01.897 "claimed": false, 00:07:01.897 "zoned": false, 00:07:01.897 "supported_io_types": { 00:07:01.897 "read": true, 
00:07:01.897 "write": true, 00:07:01.897 "unmap": true, 00:07:01.897 "flush": true, 00:07:01.897 "reset": true, 00:07:01.897 "nvme_admin": true, 00:07:01.897 "nvme_io": true, 00:07:01.897 "nvme_io_md": false, 00:07:01.897 "write_zeroes": true, 00:07:01.897 "zcopy": false, 00:07:01.897 "get_zone_info": false, 00:07:01.897 "zone_management": false, 00:07:01.897 "zone_append": false, 00:07:01.897 "compare": true, 00:07:01.897 "compare_and_write": true, 00:07:01.897 "abort": true, 00:07:01.897 "seek_hole": false, 00:07:01.897 "seek_data": false, 00:07:01.897 "copy": true, 00:07:01.897 "nvme_iov_md": false 00:07:01.897 }, 00:07:01.897 "memory_domains": [ 00:07:01.897 { 00:07:01.897 "dma_device_id": "system", 00:07:01.897 "dma_device_type": 1 00:07:01.897 } 00:07:01.897 ], 00:07:01.897 "driver_specific": { 00:07:01.897 "nvme": [ 00:07:01.897 { 00:07:01.897 "trid": { 00:07:01.897 "trtype": "TCP", 00:07:01.897 "adrfam": "IPv4", 00:07:01.897 "traddr": "10.0.0.2", 00:07:01.897 "trsvcid": "4420", 00:07:01.897 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:01.897 }, 00:07:01.897 "ctrlr_data": { 00:07:01.897 "cntlid": 1, 00:07:01.897 "vendor_id": "0x8086", 00:07:01.897 "model_number": "SPDK bdev Controller", 00:07:01.897 "serial_number": "SPDK0", 00:07:01.897 "firmware_revision": "25.01", 00:07:01.897 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:01.897 "oacs": { 00:07:01.897 "security": 0, 00:07:01.897 "format": 0, 00:07:01.897 "firmware": 0, 00:07:01.897 "ns_manage": 0 00:07:01.897 }, 00:07:01.897 "multi_ctrlr": true, 00:07:01.897 "ana_reporting": false 00:07:01.897 }, 00:07:01.897 "vs": { 00:07:01.897 "nvme_version": "1.3" 00:07:01.897 }, 00:07:01.897 "ns_data": { 00:07:01.897 "id": 1, 00:07:01.897 "can_share": true 00:07:01.897 } 00:07:01.897 } 00:07:01.897 ], 00:07:01.897 "mp_policy": "active_passive" 00:07:01.897 } 00:07:01.897 } 00:07:01.897 ] 00:07:01.897 18:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:01.897 18:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3498970 00:07:01.897 18:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:01.897 Running I/O for 10 seconds... 00:07:03.276 Latency(us) 00:07:03.276 [2024-11-20T17:44:25.601Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:03.276 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:03.276 Nvme0n1 : 1.00 23440.00 91.56 0.00 0.00 0.00 0.00 0.00 00:07:03.276 [2024-11-20T17:44:25.601Z] =================================================================================================================== 00:07:03.276 [2024-11-20T17:44:25.601Z] Total : 23440.00 91.56 0.00 0.00 0.00 0.00 0.00 00:07:03.276 00:07:03.843 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a5a25b2d-e096-4452-a779-7623f94a610a 00:07:04.102 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:04.102 Nvme0n1 : 2.00 23504.00 91.81 0.00 0.00 0.00 0.00 0.00 00:07:04.102 [2024-11-20T17:44:26.427Z] =================================================================================================================== 00:07:04.102 [2024-11-20T17:44:26.427Z] Total : 23504.00 91.81 0.00 0.00 0.00 0.00 0.00 00:07:04.102 00:07:04.102 true 00:07:04.102 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5a25b2d-e096-4452-a779-7623f94a610a 00:07:04.102 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:04.360 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:04.360 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:04.360 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3498970 00:07:04.929 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:04.929 Nvme0n1 : 3.00 23551.33 92.00 0.00 0.00 0.00 0.00 0.00 00:07:04.929 [2024-11-20T17:44:27.254Z] =================================================================================================================== 00:07:04.929 [2024-11-20T17:44:27.254Z] Total : 23551.33 92.00 0.00 0.00 0.00 0.00 0.00 00:07:04.929 00:07:05.866 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:05.866 Nvme0n1 : 4.00 23594.00 92.16 0.00 0.00 0.00 0.00 0.00 00:07:05.866 [2024-11-20T17:44:28.191Z] =================================================================================================================== 00:07:05.866 [2024-11-20T17:44:28.191Z] Total : 23594.00 92.16 0.00 0.00 0.00 0.00 0.00 00:07:05.866 00:07:07.242 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:07.242 Nvme0n1 : 5.00 23643.40 92.36 0.00 0.00 0.00 0.00 0.00 00:07:07.242 [2024-11-20T17:44:29.567Z] =================================================================================================================== 00:07:07.242 [2024-11-20T17:44:29.567Z] Total : 23643.40 92.36 0.00 0.00 0.00 0.00 0.00 00:07:07.242 00:07:08.178 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:08.178 Nvme0n1 : 6.00 23648.67 92.38 0.00 0.00 0.00 0.00 0.00 00:07:08.178 [2024-11-20T17:44:30.503Z] =================================================================================================================== 00:07:08.178 
[2024-11-20T17:44:30.503Z] Total : 23648.67 92.38 0.00 0.00 0.00 0.00 0.00 00:07:08.178 00:07:09.116 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:09.116 Nvme0n1 : 7.00 23634.86 92.32 0.00 0.00 0.00 0.00 0.00 00:07:09.116 [2024-11-20T17:44:31.441Z] =================================================================================================================== 00:07:09.116 [2024-11-20T17:44:31.441Z] Total : 23634.86 92.32 0.00 0.00 0.00 0.00 0.00 00:07:09.116 00:07:10.052 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:10.052 Nvme0n1 : 8.00 23659.62 92.42 0.00 0.00 0.00 0.00 0.00 00:07:10.052 [2024-11-20T17:44:32.377Z] =================================================================================================================== 00:07:10.052 [2024-11-20T17:44:32.377Z] Total : 23659.62 92.42 0.00 0.00 0.00 0.00 0.00 00:07:10.052 00:07:10.989 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:10.989 Nvme0n1 : 9.00 23679.22 92.50 0.00 0.00 0.00 0.00 0.00 00:07:10.989 [2024-11-20T17:44:33.314Z] =================================================================================================================== 00:07:10.989 [2024-11-20T17:44:33.314Z] Total : 23679.22 92.50 0.00 0.00 0.00 0.00 0.00 00:07:10.989 00:07:11.926 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:11.926 Nvme0n1 : 10.00 23705.50 92.60 0.00 0.00 0.00 0.00 0.00 00:07:11.926 [2024-11-20T17:44:34.251Z] =================================================================================================================== 00:07:11.926 [2024-11-20T17:44:34.251Z] Total : 23705.50 92.60 0.00 0.00 0.00 0.00 0.00 00:07:11.926 00:07:11.926 00:07:11.926 Latency(us) 00:07:11.926 [2024-11-20T17:44:34.251Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:11.926 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:11.926 Nvme0n1 : 10.01 23704.49 92.60 0.00 0.00 5396.98 2449.80 11734.06 00:07:11.926 [2024-11-20T17:44:34.251Z] =================================================================================================================== 00:07:11.926 [2024-11-20T17:44:34.251Z] Total : 23704.49 92.60 0.00 0.00 5396.98 2449.80 11734.06 00:07:11.926 { 00:07:11.926 "results": [ 00:07:11.926 { 00:07:11.926 "job": "Nvme0n1", 00:07:11.926 "core_mask": "0x2", 00:07:11.926 "workload": "randwrite", 00:07:11.926 "status": "finished", 00:07:11.926 "queue_depth": 128, 00:07:11.926 "io_size": 4096, 00:07:11.926 "runtime": 10.005824, 00:07:11.926 "iops": 23704.4945024018, 00:07:11.926 "mibps": 92.59568165000704, 00:07:11.926 "io_failed": 0, 00:07:11.926 "io_timeout": 0, 00:07:11.926 "avg_latency_us": 5396.976886940624, 00:07:11.926 "min_latency_us": 2449.7980952380954, 00:07:11.926 "max_latency_us": 11734.064761904761 00:07:11.926 } 00:07:11.926 ], 00:07:11.926 "core_count": 1 00:07:11.926 } 00:07:11.926 18:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3498740 00:07:11.926 18:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3498740 ']' 00:07:11.926 18:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3498740 00:07:11.926 18:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:11.926 18:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.926 18:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3498740 00:07:12.185 18:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:12.185 18:44:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:12.185 18:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3498740' 00:07:12.185 killing process with pid 3498740 00:07:12.185 18:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3498740 00:07:12.185 Received shutdown signal, test time was about 10.000000 seconds 00:07:12.185 00:07:12.185 Latency(us) 00:07:12.185 [2024-11-20T17:44:34.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:12.185 [2024-11-20T17:44:34.510Z] =================================================================================================================== 00:07:12.185 [2024-11-20T17:44:34.510Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:12.185 18:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3498740 00:07:12.185 18:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:12.443 18:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:12.701 18:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5a25b2d-e096-4452-a779-7623f94a610a 00:07:12.701 18:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:12.960 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:07:12.960 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:12.960 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3495640 00:07:12.960 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3495640 00:07:12.960 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3495640 Killed "${NVMF_APP[@]}" "$@" 00:07:12.960 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:12.960 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:12.960 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:12.960 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:12.960 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:12.960 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3500823 00:07:12.960 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3500823 00:07:12.960 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:12.960 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3500823 ']' 00:07:12.960 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.961 18:44:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.961 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.961 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.961 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:12.961 [2024-11-20 18:44:35.158995] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:07:12.961 [2024-11-20 18:44:35.159040] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:12.961 [2024-11-20 18:44:35.238701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.961 [2024-11-20 18:44:35.278919] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:12.961 [2024-11-20 18:44:35.278954] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:12.961 [2024-11-20 18:44:35.278961] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:12.961 [2024-11-20 18:44:35.278966] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:12.961 [2024-11-20 18:44:35.278972] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:12.961 [2024-11-20 18:44:35.279527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.226 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.226 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:13.226 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:13.226 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:13.226 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:13.226 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:13.226 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:13.500 [2024-11-20 18:44:35.581658] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:13.500 [2024-11-20 18:44:35.581739] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:13.500 [2024-11-20 18:44:35.581765] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:13.500 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:13.500 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ee6ebed2-4fba-42a7-8a60-c2aa2184667c 00:07:13.500 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=ee6ebed2-4fba-42a7-8a60-c2aa2184667c 
00:07:13.500 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:13.500 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:13.500 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:13.500 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:13.500 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:13.500 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ee6ebed2-4fba-42a7-8a60-c2aa2184667c -t 2000 00:07:13.764 [ 00:07:13.764 { 00:07:13.764 "name": "ee6ebed2-4fba-42a7-8a60-c2aa2184667c", 00:07:13.764 "aliases": [ 00:07:13.764 "lvs/lvol" 00:07:13.764 ], 00:07:13.764 "product_name": "Logical Volume", 00:07:13.764 "block_size": 4096, 00:07:13.764 "num_blocks": 38912, 00:07:13.764 "uuid": "ee6ebed2-4fba-42a7-8a60-c2aa2184667c", 00:07:13.764 "assigned_rate_limits": { 00:07:13.764 "rw_ios_per_sec": 0, 00:07:13.764 "rw_mbytes_per_sec": 0, 00:07:13.764 "r_mbytes_per_sec": 0, 00:07:13.764 "w_mbytes_per_sec": 0 00:07:13.764 }, 00:07:13.764 "claimed": false, 00:07:13.764 "zoned": false, 00:07:13.764 "supported_io_types": { 00:07:13.764 "read": true, 00:07:13.764 "write": true, 00:07:13.764 "unmap": true, 00:07:13.764 "flush": false, 00:07:13.764 "reset": true, 00:07:13.764 "nvme_admin": false, 00:07:13.764 "nvme_io": false, 00:07:13.764 "nvme_io_md": false, 00:07:13.764 "write_zeroes": true, 00:07:13.764 "zcopy": false, 00:07:13.764 "get_zone_info": false, 00:07:13.764 "zone_management": false, 00:07:13.764 "zone_append": 
false, 00:07:13.764 "compare": false, 00:07:13.764 "compare_and_write": false, 00:07:13.764 "abort": false, 00:07:13.764 "seek_hole": true, 00:07:13.764 "seek_data": true, 00:07:13.764 "copy": false, 00:07:13.764 "nvme_iov_md": false 00:07:13.764 }, 00:07:13.764 "driver_specific": { 00:07:13.764 "lvol": { 00:07:13.764 "lvol_store_uuid": "a5a25b2d-e096-4452-a779-7623f94a610a", 00:07:13.764 "base_bdev": "aio_bdev", 00:07:13.764 "thin_provision": false, 00:07:13.764 "num_allocated_clusters": 38, 00:07:13.764 "snapshot": false, 00:07:13.764 "clone": false, 00:07:13.764 "esnap_clone": false 00:07:13.764 } 00:07:13.764 } 00:07:13.764 } 00:07:13.764 ] 00:07:13.764 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:13.764 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5a25b2d-e096-4452-a779-7623f94a610a 00:07:13.764 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:14.022 18:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:14.023 18:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5a25b2d-e096-4452-a779-7623f94a610a 00:07:14.023 18:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:14.023 18:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:14.023 18:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:07:14.282 [2024-11-20 18:44:36.506406] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:14.282 18:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5a25b2d-e096-4452-a779-7623f94a610a 00:07:14.282 18:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:14.282 18:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5a25b2d-e096-4452-a779-7623f94a610a 00:07:14.282 18:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:14.282 18:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:14.282 18:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:14.282 18:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:14.282 18:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:14.282 18:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:14.282 18:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:14.282 18:44:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:14.282 18:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5a25b2d-e096-4452-a779-7623f94a610a 00:07:14.541 request: 00:07:14.541 { 00:07:14.541 "uuid": "a5a25b2d-e096-4452-a779-7623f94a610a", 00:07:14.541 "method": "bdev_lvol_get_lvstores", 00:07:14.541 "req_id": 1 00:07:14.541 } 00:07:14.541 Got JSON-RPC error response 00:07:14.541 response: 00:07:14.541 { 00:07:14.541 "code": -19, 00:07:14.541 "message": "No such device" 00:07:14.541 } 00:07:14.541 18:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:14.541 18:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:14.541 18:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:14.541 18:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:14.541 18:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:14.798 aio_bdev 00:07:14.799 18:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ee6ebed2-4fba-42a7-8a60-c2aa2184667c 00:07:14.799 18:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=ee6ebed2-4fba-42a7-8a60-c2aa2184667c 00:07:14.799 18:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:14.799 18:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:14.799 18:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:14.799 18:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:14.799 18:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:14.799 18:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ee6ebed2-4fba-42a7-8a60-c2aa2184667c -t 2000 00:07:15.056 [ 00:07:15.057 { 00:07:15.057 "name": "ee6ebed2-4fba-42a7-8a60-c2aa2184667c", 00:07:15.057 "aliases": [ 00:07:15.057 "lvs/lvol" 00:07:15.057 ], 00:07:15.057 "product_name": "Logical Volume", 00:07:15.057 "block_size": 4096, 00:07:15.057 "num_blocks": 38912, 00:07:15.057 "uuid": "ee6ebed2-4fba-42a7-8a60-c2aa2184667c", 00:07:15.057 "assigned_rate_limits": { 00:07:15.057 "rw_ios_per_sec": 0, 00:07:15.057 "rw_mbytes_per_sec": 0, 00:07:15.057 "r_mbytes_per_sec": 0, 00:07:15.057 "w_mbytes_per_sec": 0 00:07:15.057 }, 00:07:15.057 "claimed": false, 00:07:15.057 "zoned": false, 00:07:15.057 "supported_io_types": { 00:07:15.057 "read": true, 00:07:15.057 "write": true, 00:07:15.057 "unmap": true, 00:07:15.057 "flush": false, 00:07:15.057 "reset": true, 00:07:15.057 "nvme_admin": false, 00:07:15.057 "nvme_io": false, 00:07:15.057 "nvme_io_md": false, 00:07:15.057 "write_zeroes": true, 00:07:15.057 "zcopy": false, 00:07:15.057 "get_zone_info": false, 00:07:15.057 "zone_management": false, 00:07:15.057 "zone_append": false, 00:07:15.057 "compare": false, 00:07:15.057 "compare_and_write": false, 
00:07:15.057 "abort": false, 00:07:15.057 "seek_hole": true, 00:07:15.057 "seek_data": true, 00:07:15.057 "copy": false, 00:07:15.057 "nvme_iov_md": false 00:07:15.057 }, 00:07:15.057 "driver_specific": { 00:07:15.057 "lvol": { 00:07:15.057 "lvol_store_uuid": "a5a25b2d-e096-4452-a779-7623f94a610a", 00:07:15.057 "base_bdev": "aio_bdev", 00:07:15.057 "thin_provision": false, 00:07:15.057 "num_allocated_clusters": 38, 00:07:15.057 "snapshot": false, 00:07:15.057 "clone": false, 00:07:15.057 "esnap_clone": false 00:07:15.057 } 00:07:15.057 } 00:07:15.057 } 00:07:15.057 ] 00:07:15.057 18:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:15.057 18:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5a25b2d-e096-4452-a779-7623f94a610a 00:07:15.057 18:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:15.314 18:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:15.314 18:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5a25b2d-e096-4452-a779-7623f94a610a 00:07:15.314 18:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:15.572 18:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:15.572 18:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ee6ebed2-4fba-42a7-8a60-c2aa2184667c 00:07:15.572 18:44:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a5a25b2d-e096-4452-a779-7623f94a610a 00:07:15.830 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:16.089 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:16.089 00:07:16.089 real 0m16.860s 00:07:16.089 user 0m43.614s 00:07:16.089 sys 0m3.796s 00:07:16.089 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.089 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:16.089 ************************************ 00:07:16.089 END TEST lvs_grow_dirty 00:07:16.089 ************************************ 00:07:16.089 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:16.089 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:16.089 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:16.089 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:16.089 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:16.089 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:16.089 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:16.089 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:16.089 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:16.089 nvmf_trace.0 00:07:16.089 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:16.089 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:16.089 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:16.089 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:16.089 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:16.089 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:16.089 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:16.089 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:16.089 rmmod nvme_tcp 00:07:16.089 rmmod nvme_fabrics 00:07:16.089 rmmod nvme_keyring 00:07:16.349 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:16.349 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:16.349 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:16.349 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3500823 ']' 00:07:16.349 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3500823 00:07:16.349 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3500823 ']' 00:07:16.349 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3500823 
00:07:16.349 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:16.349 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.349 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3500823 00:07:16.349 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.349 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.349 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3500823' 00:07:16.349 killing process with pid 3500823 00:07:16.349 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3500823 00:07:16.349 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3500823 00:07:16.349 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:16.349 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:16.349 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:16.349 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:16.349 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:16.349 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:16.349 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:16.349 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:16.349 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:07:16.349 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.349 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:16.349 18:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:18.887 00:07:18.887 real 0m41.775s 00:07:18.887 user 1m4.327s 00:07:18.887 sys 0m10.300s 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:18.887 ************************************ 00:07:18.887 END TEST nvmf_lvs_grow 00:07:18.887 ************************************ 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:18.887 ************************************ 00:07:18.887 START TEST nvmf_bdev_io_wait 00:07:18.887 ************************************ 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:18.887 * Looking for test storage... 
00:07:18.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:18.887 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.887 --rc genhtml_branch_coverage=1 00:07:18.887 --rc genhtml_function_coverage=1 00:07:18.887 --rc genhtml_legend=1 00:07:18.887 --rc geninfo_all_blocks=1 00:07:18.887 --rc geninfo_unexecuted_blocks=1 00:07:18.887 00:07:18.887 ' 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:18.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.887 --rc genhtml_branch_coverage=1 00:07:18.887 --rc genhtml_function_coverage=1 00:07:18.887 --rc genhtml_legend=1 00:07:18.887 --rc geninfo_all_blocks=1 00:07:18.887 --rc geninfo_unexecuted_blocks=1 00:07:18.887 00:07:18.887 ' 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:18.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.887 --rc genhtml_branch_coverage=1 00:07:18.887 --rc genhtml_function_coverage=1 00:07:18.887 --rc genhtml_legend=1 00:07:18.887 --rc geninfo_all_blocks=1 00:07:18.887 --rc geninfo_unexecuted_blocks=1 00:07:18.887 00:07:18.887 ' 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:18.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.887 --rc genhtml_branch_coverage=1 00:07:18.887 --rc genhtml_function_coverage=1 00:07:18.887 --rc genhtml_legend=1 00:07:18.887 --rc geninfo_all_blocks=1 00:07:18.887 --rc geninfo_unexecuted_blocks=1 00:07:18.887 00:07:18.887 ' 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:18.887 18:44:40 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.887 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.888 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.888 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.888 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:18.888 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.888 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:18.888 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:18.888 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:18.888 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:18.888 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:18.888 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:18.888 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:18.888 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:18.888 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:18.888 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:18.888 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:18.888 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:18.888 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:18.888 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:18.888 18:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:18.888 18:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:18.888 18:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:18.888 18:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:18.888 18:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:18.888 18:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.888 18:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:18.888 18:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:07:18.888 18:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:18.888 18:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:18.888 18:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:18.888 18:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:25.459 18:44:46 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:25.459 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:25.459 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.459 18:44:46 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:25.459 Found net devices under 0000:86:00.0: cvl_0_0 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.459 
18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:25.459 Found net devices under 0000:86:00.1: cvl_0_1 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:25.459 18:44:46 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:25.459 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:25.460 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:25.460 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:25.460 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:25.460 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:25.460 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:25.460 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:25.460 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:25.460 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:07:25.460 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:25.460 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:25.460 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:25.460 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:25.460 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.477 ms 00:07:25.460 00:07:25.460 --- 10.0.0.2 ping statistics --- 00:07:25.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.460 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:07:25.460 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:25.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:25.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:07:25.460 00:07:25.460 --- 10.0.0.1 ping statistics --- 00:07:25.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.460 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:07:25.460 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:25.460 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:25.460 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:25.460 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:25.460 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:25.460 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:25.460 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:25.460 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:25.460 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:25.460 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3504913 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 3504913 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3504913 ']' 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:25.460 [2024-11-20 18:44:47.055779] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:07:25.460 [2024-11-20 18:44:47.055825] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.460 [2024-11-20 18:44:47.138509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:25.460 [2024-11-20 18:44:47.179662] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:25.460 [2024-11-20 18:44:47.179703] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:25.460 [2024-11-20 18:44:47.179712] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:25.460 [2024-11-20 18:44:47.179718] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:25.460 [2024-11-20 18:44:47.179722] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:25.460 [2024-11-20 18:44:47.181301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.460 [2024-11-20 18:44:47.181412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.460 [2024-11-20 18:44:47.181514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.460 [2024-11-20 18:44:47.181516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:25.460 18:44:47 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:25.460 [2024-11-20 18:44:47.325735] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:25.460 Malloc0 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.460 
18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:25.460 [2024-11-20 18:44:47.373120] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3505133 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3505135 
00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:25.460 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:25.460 { 00:07:25.460 "params": { 00:07:25.460 "name": "Nvme$subsystem", 00:07:25.460 "trtype": "$TEST_TRANSPORT", 00:07:25.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:25.460 "adrfam": "ipv4", 00:07:25.460 "trsvcid": "$NVMF_PORT", 00:07:25.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:25.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:25.460 "hdgst": ${hdgst:-false}, 00:07:25.461 "ddgst": ${ddgst:-false} 00:07:25.461 }, 00:07:25.461 "method": "bdev_nvme_attach_controller" 00:07:25.461 } 00:07:25.461 EOF 00:07:25.461 )") 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3505137 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:25.461 { 00:07:25.461 "params": { 00:07:25.461 
"name": "Nvme$subsystem", 00:07:25.461 "trtype": "$TEST_TRANSPORT", 00:07:25.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:25.461 "adrfam": "ipv4", 00:07:25.461 "trsvcid": "$NVMF_PORT", 00:07:25.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:25.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:25.461 "hdgst": ${hdgst:-false}, 00:07:25.461 "ddgst": ${ddgst:-false} 00:07:25.461 }, 00:07:25.461 "method": "bdev_nvme_attach_controller" 00:07:25.461 } 00:07:25.461 EOF 00:07:25.461 )") 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3505140 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:25.461 { 00:07:25.461 "params": { 00:07:25.461 "name": "Nvme$subsystem", 00:07:25.461 "trtype": "$TEST_TRANSPORT", 00:07:25.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:25.461 "adrfam": "ipv4", 00:07:25.461 "trsvcid": "$NVMF_PORT", 00:07:25.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:25.461 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:07:25.461 "hdgst": ${hdgst:-false}, 00:07:25.461 "ddgst": ${ddgst:-false} 00:07:25.461 }, 00:07:25.461 "method": "bdev_nvme_attach_controller" 00:07:25.461 } 00:07:25.461 EOF 00:07:25.461 )") 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:25.461 { 00:07:25.461 "params": { 00:07:25.461 "name": "Nvme$subsystem", 00:07:25.461 "trtype": "$TEST_TRANSPORT", 00:07:25.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:25.461 "adrfam": "ipv4", 00:07:25.461 "trsvcid": "$NVMF_PORT", 00:07:25.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:25.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:25.461 "hdgst": ${hdgst:-false}, 00:07:25.461 "ddgst": ${ddgst:-false} 00:07:25.461 }, 00:07:25.461 "method": "bdev_nvme_attach_controller" 00:07:25.461 } 00:07:25.461 EOF 00:07:25.461 )") 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3505133 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
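The trace above shows nvmf/common.sh building one `bdev_nvme_attach_controller` entry per subsystem by appending a here-document into a `config` array, then joining the entries with `IFS=,` and filtering the result through `jq`. A minimal runnable sketch of that pattern follows; the function name `gen_config` and the fallback values for `TEST_TRANSPORT`, `NVMF_FIRST_TARGET_IP`, and `NVMF_PORT` are illustrative stand-ins, not taken from the suite:

```shell
#!/usr/bin/env bash
# Sketch of the config-array pattern traced above (nvmf/common.sh).
# gen_config is a hypothetical name; the :-defaults are illustrative.
gen_config() {
    local config=() subsystem
    for subsystem in "${@:-1}"; do
        # Append one JSON-RPC params block per subsystem.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the per-subsystem blocks with commas, as the traced IFS=, step does.
    local IFS=,
    printf '%s\n' "${config[*]}"
}
gen_config 1
```

In the real run this output is handed to bdevperf over a file descriptor rather than printed.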
nvmf/common.sh@582 -- # cat 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:25.461 "params": { 00:07:25.461 "name": "Nvme1", 00:07:25.461 "trtype": "tcp", 00:07:25.461 "traddr": "10.0.0.2", 00:07:25.461 "adrfam": "ipv4", 00:07:25.461 "trsvcid": "4420", 00:07:25.461 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:25.461 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:25.461 "hdgst": false, 00:07:25.461 "ddgst": false 00:07:25.461 }, 00:07:25.461 "method": "bdev_nvme_attach_controller" 00:07:25.461 }' 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:25.461 "params": { 00:07:25.461 "name": "Nvme1", 00:07:25.461 "trtype": "tcp", 00:07:25.461 "traddr": "10.0.0.2", 00:07:25.461 "adrfam": "ipv4", 00:07:25.461 "trsvcid": "4420", 00:07:25.461 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:25.461 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:25.461 "hdgst": false, 00:07:25.461 "ddgst": false 00:07:25.461 }, 00:07:25.461 "method": "bdev_nvme_attach_controller" 00:07:25.461 }' 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:25.461 "params": { 00:07:25.461 "name": "Nvme1", 00:07:25.461 "trtype": "tcp", 00:07:25.461 "traddr": "10.0.0.2", 00:07:25.461 "adrfam": "ipv4", 00:07:25.461 "trsvcid": "4420", 00:07:25.461 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:25.461 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:25.461 "hdgst": false, 00:07:25.461 "ddgst": false 00:07:25.461 }, 00:07:25.461 "method": "bdev_nvme_attach_controller" 00:07:25.461 }' 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:25.461 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:25.461 "params": { 00:07:25.461 "name": "Nvme1", 00:07:25.461 "trtype": "tcp", 00:07:25.461 "traddr": "10.0.0.2", 00:07:25.461 "adrfam": "ipv4", 00:07:25.461 "trsvcid": "4420", 00:07:25.461 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:25.461 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:25.461 "hdgst": false, 00:07:25.461 "ddgst": false 00:07:25.461 }, 00:07:25.461 "method": "bdev_nvme_attach_controller" 00:07:25.461 }' 00:07:25.461 [2024-11-20 18:44:47.426926] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:07:25.461 [2024-11-20 18:44:47.426927] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:07:25.461
[2024-11-20 18:44:47.426924] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:07:25.461
[2024-11-20 18:44:47.426979] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib
[2024-11-20 18:44:47.426980] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1
[2024-11-20 18:44:47.426980] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:25.461
.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:25.461
--proc-type=auto ] 00:07:25.461
[2024-11-20 18:44:47.429227] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:07:25.461 [2024-11-20 18:44:47.429276] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:25.461 [2024-11-20 18:44:47.622828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.461 [2024-11-20 18:44:47.665544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:25.461 [2024-11-20 18:44:47.714910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.461 [2024-11-20 18:44:47.757218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:25.719 [2024-11-20 18:44:47.815317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.719 [2024-11-20 18:44:47.858388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.719 [2024-11-20 18:44:47.872308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:25.719 [2024-11-20 18:44:47.901165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:25.976 Running I/O for 1 seconds... 00:07:25.976 Running I/O for 1 seconds... 00:07:25.976 Running I/O for 1 seconds... 00:07:25.976 Running I/O for 1 seconds... 
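Four bdevperf instances run concurrently in the trace, one per workload (write on core mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80), each fed the generated target JSON through process substitution (hence the `--json /dev/fd/63` arguments) and later reaped with `wait`. A runnable sketch of that orchestration, with `cat` standing in for the bdevperf binary so the sketch executes anywhere:

```shell
#!/usr/bin/env bash
# Sketch of the parallel-bdevperf orchestration from bdev_io_wait.sh.
# run_perf is a stand-in: the real suite execs
#   build/examples/bdevperf -m <mask> -i <shm_id> --json /dev/fd/63 \
#       -q 128 -o 4096 -w <workload> -t 1 -s 256
run_perf() { # <core_mask> <workload>
    # Process substitution delivers the config on /dev/fd/N, as --json does.
    cat <(printf '{"mask": "%s", "workload": "%s"}\n' "$1" "$2") > /dev/null
    echo "done $2"
}
run_perf 0x20 read  & READ_PID=$!
run_perf 0x40 flush & FLUSH_PID=$!
run_perf 0x80 unmap & UNMAP_PID=$!
# Reap every instance before tearing the target down.
wait "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"
```

The completion order of the background jobs is nondeterministic, which is why the "Running I/O" and result lines above appear interleaved.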
00:07:26.907 7956.00 IOPS, 31.08 MiB/s 00:07:26.907 Latency(us) 00:07:26.907 [2024-11-20T17:44:49.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.907 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:26.907 Nvme1n1 : 1.02 7942.04 31.02 0.00 0.00 15944.43 5492.54 23592.96 00:07:26.907 [2024-11-20T17:44:49.232Z] =================================================================================================================== 00:07:26.907 [2024-11-20T17:44:49.232Z] Total : 7942.04 31.02 0.00 0.00 15944.43 5492.54 23592.96 00:07:26.907 10702.00 IOPS, 41.80 MiB/s 00:07:26.907 Latency(us) 00:07:26.907 [2024-11-20T17:44:49.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.907 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:26.907 Nvme1n1 : 1.01 10749.44 41.99 0.00 0.00 11859.28 6553.60 22094.99 00:07:26.907 [2024-11-20T17:44:49.232Z] =================================================================================================================== 00:07:26.907 [2024-11-20T17:44:49.232Z] Total : 10749.44 41.99 0.00 0.00 11859.28 6553.60 22094.99 00:07:26.907 7962.00 IOPS, 31.10 MiB/s 00:07:26.907 Latency(us) 00:07:26.907 [2024-11-20T17:44:49.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.907 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:26.907 Nvme1n1 : 1.00 8077.18 31.55 0.00 0.00 15819.20 2278.16 36949.82 00:07:26.907 [2024-11-20T17:44:49.232Z] =================================================================================================================== 00:07:26.907 [2024-11-20T17:44:49.232Z] Total : 8077.18 31.55 0.00 0.00 15819.20 2278.16 36949.82 00:07:26.907 244800.00 IOPS, 956.25 MiB/s 00:07:26.907 Latency(us) 00:07:26.907 [2024-11-20T17:44:49.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.907 Job: Nvme1n1 (Core Mask 
0x40, workload: flush, depth: 128, IO size: 4096) 00:07:26.907 Nvme1n1 : 1.00 244427.43 954.79 0.00 0.00 520.93 234.06 1521.37 00:07:26.907 [2024-11-20T17:44:49.232Z] =================================================================================================================== 00:07:26.907 [2024-11-20T17:44:49.232Z] Total : 244427.43 954.79 0.00 0.00 520.93 234.06 1521.37 00:07:27.164 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3505135 00:07:27.164 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3505137 00:07:27.164 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3505140 00:07:27.164 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:27.164 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.164 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:27.164 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.164 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:27.164 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:27.164 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:27.164 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:27.164 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:27.164 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:27.164 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:27.165 
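The four latency tables above share a fixed layout ending in a `Total :` row. As a hypothetical convenience (not part of the suite), the per-job Total IOPS can be pulled out of such a table with a one-line awk filter; the sample rows below mirror the read-job table printed above:

```shell
#!/usr/bin/env bash
# Hypothetical helper: extract the IOPS column from a bdevperf summary table.
table='Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
Nvme1n1 : 1.01 10749.44 41.99 0.00 0.00 11859.28 6553.60 22094.99
Total : 10749.44 41.99 0.00 0.00 11859.28 6553.60 22094.99'
# "Total : <IOPS> ..." -> field 3 is the IOPS value.
awk '$1 == "Total" { print $3 }' <<< "$table"   # prints 10749.44
```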
18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:27.165 rmmod nvme_tcp 00:07:27.165 rmmod nvme_fabrics 00:07:27.165 rmmod nvme_keyring 00:07:27.165 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:27.165 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:27.165 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:27.165 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3504913 ']' 00:07:27.165 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3504913 00:07:27.165 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3504913 ']' 00:07:27.165 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3504913 00:07:27.165 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:27.165 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:27.165 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3504913 00:07:27.165 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:27.165 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:27.165 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3504913' 00:07:27.165 killing process with pid 3504913 00:07:27.165 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3504913 00:07:27.165 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
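The `killprocess 3504913` sequence traced above first probes the pid with `kill -0`, checks the process name via `ps --no-headers -o comm=` (refusing to kill a bare `sudo`), then kills and reaps it. A simplified, runnable sketch of that pattern; the real helper in autotest_common.sh also branches on `uname` and supports FreeBSD:

```shell
#!/usr/bin/env bash
# Simplified sketch of the killprocess helper traced above
# (autotest_common.sh); the uname/FreeBSD branch is omitted.
killprocess() {
    local pid=$1
    [[ -n "$pid" ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 0   # already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [[ "$name" != sudo ]] || return 1        # never kill a bare sudo
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap; ignore the signal status
}
sleep 5 &
killprocess $!
```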
common/autotest_common.sh@978 -- # wait 3504913 00:07:27.423 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:27.423 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:27.423 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:27.423 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:27.423 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:27.423 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:27.423 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:27.423 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:27.423 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:27.423 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.423 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:27.423 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.960 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:29.960 00:07:29.960 real 0m10.875s 00:07:29.960 user 0m16.826s 00:07:29.960 sys 0m6.120s 00:07:29.960 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.960 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:29.960 ************************************ 00:07:29.960 END TEST nvmf_bdev_io_wait 
00:07:29.960 ************************************ 00:07:29.960 18:44:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:29.960 18:44:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:29.960 18:44:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.960 18:44:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:29.960 ************************************ 00:07:29.960 START TEST nvmf_queue_depth 00:07:29.961 ************************************ 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:29.961 * Looking for test storage... 00:07:29.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.961 18:44:51 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:29.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.961 --rc genhtml_branch_coverage=1 00:07:29.961 --rc genhtml_function_coverage=1 00:07:29.961 --rc genhtml_legend=1 00:07:29.961 --rc geninfo_all_blocks=1 00:07:29.961 --rc 
geninfo_unexecuted_blocks=1 00:07:29.961 00:07:29.961 ' 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:29.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.961 --rc genhtml_branch_coverage=1 00:07:29.961 --rc genhtml_function_coverage=1 00:07:29.961 --rc genhtml_legend=1 00:07:29.961 --rc geninfo_all_blocks=1 00:07:29.961 --rc geninfo_unexecuted_blocks=1 00:07:29.961 00:07:29.961 ' 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:29.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.961 --rc genhtml_branch_coverage=1 00:07:29.961 --rc genhtml_function_coverage=1 00:07:29.961 --rc genhtml_legend=1 00:07:29.961 --rc geninfo_all_blocks=1 00:07:29.961 --rc geninfo_unexecuted_blocks=1 00:07:29.961 00:07:29.961 ' 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:29.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.961 --rc genhtml_branch_coverage=1 00:07:29.961 --rc genhtml_function_coverage=1 00:07:29.961 --rc genhtml_legend=1 00:07:29.961 --rc geninfo_all_blocks=1 00:07:29.961 --rc geninfo_unexecuted_blocks=1 00:07:29.961 00:07:29.961 ' 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.961 18:44:51 
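The `lt 1.15 2` call traced above routes through `cmp_versions` in scripts/common.sh: both version strings are split into component arrays and compared position by position, with missing components treated as zero, to decide which lcov option set applies. A simplified, runnable sketch of that comparison (numeric components only; the real helper also splits on `-` and `:`):

```shell
#!/usr/bin/env bash
# Simplified sketch of the lt/cmp_versions logic traced above
# (scripts/common.sh); handles dotted numeric versions only.
lt() {
    local -a ver1 ver2
    IFS=. read -ra ver1 <<< "$1"
    IFS=. read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # pad short versions with 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not less-than
}
lt 1.15 2 && echo "1.15 < 2"
```

Component-wise comparison is what makes `1.15 < 2` true while a plain string compare would order them the other way.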
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.961 18:44:51 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:29.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:29.961 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:29.962 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:29.962 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:29.962 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:29.962 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:29.962 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:29.962 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:29.962 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:29.962 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.962 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:29.962 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:29.962 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:29.962 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.962 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:29.962 18:44:51 
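The `[: : integer expression expected` message above comes from evaluating `'[' '' -eq 1 ']'` when the tested variable expands to an empty string; `-eq` requires both operands to be integers. A common guard, shown as a hedged sketch rather than the suite's actual fix, is to default the value before the numeric test:

```shell
#!/usr/bin/env bash
# The traced failure mode:  [ "" -eq 1 ]  -> "integer expression expected".
# Defaulting the operand (${1:-0}) keeps the numeric test well-formed.
check_flag() {
    if [ "${1:-0}" -eq 1 ]; then
        echo enabled
    else
        echo disabled
    fi
}
check_flag ""   # empty input falls back to 0, so this reports disabled
check_flag 1
```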
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.962 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:29.962 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:29.962 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:29.962 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:36.534 18:44:57 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:36.534 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:36.534 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:36.535 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:36.535 Found net devices under 0000:86:00.0: cvl_0_0 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:36.535 Found net devices under 0000:86:00.1: cvl_0_1 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:36.535 
18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:36.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:36.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.494 ms 00:07:36.535 00:07:36.535 --- 10.0.0.2 ping statistics --- 00:07:36.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.535 rtt min/avg/max/mdev = 0.494/0.494/0.494/0.000 ms 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:36.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:36.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:07:36.535 00:07:36.535 --- 10.0.0.1 ping statistics --- 00:07:36.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.535 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3508927 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
3508927 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3508927 ']' 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.535 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:36.535 [2024-11-20 18:44:58.041805] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:07:36.535 [2024-11-20 18:44:58.041854] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:36.535 [2024-11-20 18:44:58.127520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.535 [2024-11-20 18:44:58.167761] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:36.535 [2024-11-20 18:44:58.167796] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:36.535 [2024-11-20 18:44:58.167803] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:36.535 [2024-11-20 18:44:58.167809] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:36.535 [2024-11-20 18:44:58.167814] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:36.535 [2024-11-20 18:44:58.168378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.535 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.535 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:36.535 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:36.535 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:36.535 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:36.535 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:36.535 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:36.535 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.535 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:36.535 [2024-11-20 18:44:58.304942] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.535 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.535 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:07:36.535 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.535 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:36.535 Malloc0 00:07:36.535 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.535 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:36.535 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.535 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:36.535 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.535 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:36.535 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.535 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:36.535 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.535 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:36.535 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.535 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:36.535 [2024-11-20 18:44:58.355161] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.535 18:44:58 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.535 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3509047 00:07:36.535 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:36.535 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:36.535 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3509047 /var/tmp/bdevperf.sock 00:07:36.535 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3509047 ']' 00:07:36.535 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:36.535 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.535 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:36.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:36.535 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.535 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:36.535 [2024-11-20 18:44:58.405382] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:07:36.535 [2024-11-20 18:44:58.405424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3509047 ] 00:07:36.536 [2024-11-20 18:44:58.479233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.536 [2024-11-20 18:44:58.521306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.536 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.536 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:36.536 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:36.536 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.536 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:36.536 NVMe0n1 00:07:36.536 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.536 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:36.536 Running I/O for 10 seconds... 
00:07:38.849 11315.00 IOPS, 44.20 MiB/s [2024-11-20T17:45:02.103Z] 11776.00 IOPS, 46.00 MiB/s [2024-11-20T17:45:03.036Z] 11943.67 IOPS, 46.65 MiB/s [2024-11-20T17:45:03.970Z] 12017.00 IOPS, 46.94 MiB/s [2024-11-20T17:45:04.904Z] 12071.20 IOPS, 47.15 MiB/s [2024-11-20T17:45:05.836Z] 12109.83 IOPS, 47.30 MiB/s [2024-11-20T17:45:07.209Z] 12124.14 IOPS, 47.36 MiB/s [2024-11-20T17:45:08.142Z] 12137.25 IOPS, 47.41 MiB/s [2024-11-20T17:45:09.077Z] 12152.33 IOPS, 47.47 MiB/s [2024-11-20T17:45:09.077Z] 12168.30 IOPS, 47.53 MiB/s 00:07:46.752 Latency(us) 00:07:46.752 [2024-11-20T17:45:09.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:46.752 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:07:46.752 Verification LBA range: start 0x0 length 0x4000 00:07:46.752 NVMe0n1 : 10.05 12211.75 47.70 0.00 0.00 83594.45 8238.81 53177.78 00:07:46.752 [2024-11-20T17:45:09.077Z] =================================================================================================================== 00:07:46.752 [2024-11-20T17:45:09.077Z] Total : 12211.75 47.70 0.00 0.00 83594.45 8238.81 53177.78 00:07:46.752 { 00:07:46.752 "results": [ 00:07:46.752 { 00:07:46.752 "job": "NVMe0n1", 00:07:46.752 "core_mask": "0x1", 00:07:46.752 "workload": "verify", 00:07:46.752 "status": "finished", 00:07:46.752 "verify_range": { 00:07:46.752 "start": 0, 00:07:46.752 "length": 16384 00:07:46.752 }, 00:07:46.752 "queue_depth": 1024, 00:07:46.752 "io_size": 4096, 00:07:46.752 "runtime": 10.04827, 00:07:46.752 "iops": 12211.753864097998, 00:07:46.752 "mibps": 47.702163531632806, 00:07:46.752 "io_failed": 0, 00:07:46.752 "io_timeout": 0, 00:07:46.752 "avg_latency_us": 83594.45495143483, 00:07:46.752 "min_latency_us": 8238.81142857143, 00:07:46.752 "max_latency_us": 53177.782857142854 00:07:46.752 } 00:07:46.752 ], 00:07:46.752 "core_count": 1 00:07:46.752 } 00:07:46.752 18:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 3509047 00:07:46.752 18:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3509047 ']' 00:07:46.752 18:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3509047 00:07:46.752 18:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:46.752 18:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:46.752 18:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3509047 00:07:46.752 18:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:46.752 18:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:46.752 18:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3509047' 00:07:46.752 killing process with pid 3509047 00:07:46.752 18:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3509047 00:07:46.752 Received shutdown signal, test time was about 10.000000 seconds 00:07:46.752 00:07:46.752 Latency(us) 00:07:46.752 [2024-11-20T17:45:09.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:46.752 [2024-11-20T17:45:09.077Z] =================================================================================================================== 00:07:46.752 [2024-11-20T17:45:09.077Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:46.752 18:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3509047 00:07:47.013 18:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:47.013 18:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:07:47.013 18:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:47.013 18:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:07:47.013 18:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:47.013 18:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:07:47.013 18:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:47.013 18:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:47.013 rmmod nvme_tcp 00:07:47.013 rmmod nvme_fabrics 00:07:47.013 rmmod nvme_keyring 00:07:47.013 18:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:47.013 18:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:07:47.013 18:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:07:47.013 18:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3508927 ']' 00:07:47.013 18:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3508927 00:07:47.013 18:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3508927 ']' 00:07:47.013 18:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3508927 00:07:47.013 18:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:47.013 18:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:47.013 18:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3508927 00:07:47.013 18:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:07:47.013 18:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:47.013 18:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3508927' 00:07:47.013 killing process with pid 3508927 00:07:47.013 18:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3508927 00:07:47.013 18:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3508927 00:07:47.272 18:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:47.272 18:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:47.272 18:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:47.272 18:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:07:47.272 18:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:07:47.272 18:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:47.272 18:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:07:47.272 18:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:47.272 18:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:47.272 18:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.272 18:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:47.272 18:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.178 18:45:11 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:49.438 00:07:49.438 real 0m19.768s 00:07:49.438 user 0m22.917s 00:07:49.438 sys 0m6.190s 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:49.438 ************************************ 00:07:49.438 END TEST nvmf_queue_depth 00:07:49.438 ************************************ 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:49.438 ************************************ 00:07:49.438 START TEST nvmf_target_multipath 00:07:49.438 ************************************ 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:49.438 * Looking for test storage... 
00:07:49.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:07:49.438 18:45:11 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:07:49.438 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:07:49.439 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:49.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.439 --rc genhtml_branch_coverage=1 00:07:49.439 --rc genhtml_function_coverage=1 00:07:49.439 --rc genhtml_legend=1 00:07:49.439 --rc geninfo_all_blocks=1 00:07:49.439 --rc geninfo_unexecuted_blocks=1 00:07:49.439 00:07:49.439 ' 00:07:49.439 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:49.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.439 --rc genhtml_branch_coverage=1 00:07:49.439 --rc genhtml_function_coverage=1 00:07:49.439 --rc genhtml_legend=1 00:07:49.439 --rc geninfo_all_blocks=1 00:07:49.439 --rc geninfo_unexecuted_blocks=1 00:07:49.439 00:07:49.439 ' 00:07:49.439 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:49.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.439 --rc genhtml_branch_coverage=1 00:07:49.439 --rc genhtml_function_coverage=1 00:07:49.439 --rc genhtml_legend=1 00:07:49.439 --rc geninfo_all_blocks=1 00:07:49.439 --rc geninfo_unexecuted_blocks=1 00:07:49.439 00:07:49.439 ' 00:07:49.439 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:49.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.439 --rc genhtml_branch_coverage=1 00:07:49.439 --rc genhtml_function_coverage=1 00:07:49.439 --rc genhtml_legend=1 00:07:49.439 --rc geninfo_all_blocks=1 00:07:49.439 --rc geninfo_unexecuted_blocks=1 00:07:49.439 00:07:49.439 ' 00:07:49.439 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:49.439 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:07:49.439 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:49.439 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:49.439 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:49.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:07:49.699 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:07:56.276 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:56.276 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:07:56.276 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:56.276 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:56.276 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:56.276 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:56.276 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:56.277 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:56.277 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:56.277 Found net devices under 0000:86:00.0: cvl_0_0 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:56.277 18:45:17 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:56.277 Found net devices under 0000:86:00.1: cvl_0_1 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:56.277 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:56.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:56.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.338 ms 00:07:56.278 00:07:56.278 --- 10.0.0.2 ping statistics --- 00:07:56.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.278 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:56.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:56.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:07:56.278 00:07:56.278 --- 10.0.0.1 ping statistics --- 00:07:56.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.278 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:07:56.278 only one NIC for nvmf test 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:07:56.278 18:45:17 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:56.278 rmmod nvme_tcp 00:07:56.278 rmmod nvme_fabrics 00:07:56.278 rmmod nvme_keyring 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns
00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:56.278 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:57.657 18:45:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:07:57.657 18:45:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0
00:07:57.657 18:45:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini
00:07:57.657 18:45:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:57.657 18:45:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:07:57.657 18:45:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:57.657 18:45:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:07:57.657 18:45:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:57.657 18:45:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:57.657 18:45:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:57.657 18:45:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:07:57.657 18:45:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
00:07:57.657 18:45:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:07:57.657 18:45:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '['
'' == iso ']'
00:07:57.657 18:45:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:07:57.657 18:45:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:07:57.657 18:45:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr
00:07:57.657 18:45:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save
00:07:57.657 18:45:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:07:57.657 18:45:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:07:57.657 18:45:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:07:57.657 18:45:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns
00:07:57.657 18:45:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:57.657 18:45:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:57.657 18:45:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:57.657 18:45:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:07:57.657
00:07:57.657 real 0m8.397s
00:07:57.657 user 0m1.929s
00:07:57.657 sys 0m4.475s
00:07:57.657 18:45:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:57.657 18:45:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:07:57.657 ************************************
00:07:57.657 END TEST nvmf_target_multipath
00:07:57.657 ************************************
00:07:57.917 18:45:20 nvmf_tcp.nvmf_target_core
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:07:57.917 18:45:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:57.917 18:45:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:57.917 18:45:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:57.917 ************************************
00:07:57.917 START TEST nvmf_zcopy
00:07:57.917 ************************************
00:07:57.917 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:07:57.917 * Looking for test storage...
00:07:57.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:57.917 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:57.917 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version
00:07:57.917 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:57.917 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:57.917 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:57.917 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:57.917 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:57.917 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-:
00:07:57.917 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1
00:07:57.917 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-:
00:07:57.917 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2
00:07:57.917 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<'
00:07:57.917 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2
00:07:57.917 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1
00:07:57.917 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:57.917 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in
00:07:57.917 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1
00:07:57.917 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:57.917 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:57.917 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1
00:07:57.917 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1
00:07:57.917 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:57.917 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1
00:07:57.918 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1
00:07:57.918 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2
00:07:57.918 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2
00:07:57.918 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:57.918 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2
00:07:57.918 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2
00:07:57.918 18:45:20
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:57.918 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:57.918 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0
00:07:57.918 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:57.918 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:57.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:57.918 --rc genhtml_branch_coverage=1
00:07:57.918 --rc genhtml_function_coverage=1
00:07:57.918 --rc genhtml_legend=1
00:07:57.918 --rc geninfo_all_blocks=1
00:07:57.918 --rc geninfo_unexecuted_blocks=1
00:07:57.918
00:07:57.918 '
00:07:57.918 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:07:57.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:57.918 --rc genhtml_branch_coverage=1
00:07:57.918 --rc genhtml_function_coverage=1
00:07:57.918 --rc genhtml_legend=1
00:07:57.918 --rc geninfo_all_blocks=1
00:07:57.918 --rc geninfo_unexecuted_blocks=1
00:07:57.918
00:07:57.918 '
00:07:57.918 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:07:57.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:57.918 --rc genhtml_branch_coverage=1
00:07:57.918 --rc genhtml_function_coverage=1
00:07:57.918 --rc genhtml_legend=1
00:07:57.918 --rc geninfo_all_blocks=1
00:07:57.918 --rc geninfo_unexecuted_blocks=1
00:07:57.918
00:07:57.918 '
00:07:57.918 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:07:57.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:57.918 --rc genhtml_branch_coverage=1
00:07:57.918 --rc
genhtml_function_coverage=1
00:07:57.918 --rc genhtml_legend=1
00:07:57.918 --rc geninfo_all_blocks=1
00:07:57.918 --rc geninfo_unexecuted_blocks=1
00:07:57.918
00:07:57.918 '
00:07:57.918 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:07:57.918 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s
00:07:57.918 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:57.918 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:57.918 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:57.918 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:57.918 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:57.918 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:57.918 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:57.918 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:57.918 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:57.918 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:57.918 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:07:57.918 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:07:57.918 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:58.177 18:45:20
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:58.177 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:58.178 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:58.178 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:58.178 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob
00:07:58.178 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:58.178 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:58.178 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:58.178 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:58.178 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- #
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:58.178 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:58.178 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH
00:07:58.178 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:58.178 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0
00:07:58.178 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:58.178 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:58.178 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:58.178 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:58.178 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:58.178 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:07:58.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:58.178 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:58.178 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:58.178 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0
00:07:58.178 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit
00:07:58.178 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:07:58.178 18:45:20
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:07:58.178 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs
00:07:58.178 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no
00:07:58.178 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns
00:07:58.178 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:58.178 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:58.178 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:58.178 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:07:58.178 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:07:58.178 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable
00:07:58.178 18:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=()
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=()
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=()
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers
00:08:04.748 18:45:25
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=()
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=()
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=()
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=()
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy --
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:08:04.748 Found 0000:86:00.0 (0x8086 - 0x159b)
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367
-- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:08:04.748 Found 0000:86:00.1 (0x8086 - 0x159b)
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]]
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:08:04.748 Found net devices under 0000:86:00.0: cvl_0_0
00:08:04.748 18:45:25
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]]
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:08:04.748 Found net devices under 0000:86:00.1: cvl_0_1
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:08:04.748 18:45:25
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:08:04.748 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec
cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:08:04.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:04.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.431 ms
00:08:04.749
00:08:04.749 --- 10.0.0.2 ping statistics ---
00:08:04.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:04.749 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms
00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:04.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:04.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms
00:08:04.749
00:08:04.749 --- 10.0.0.1 ping statistics ---
00:08:04.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:04.749 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms
00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0
00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3518493
00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3518493
00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3518493 ']' 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:04.749 [2024-11-20 18:45:26.362155] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:08:04.749 [2024-11-20 18:45:26.362213] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.749 [2024-11-20 18:45:26.442828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.749 [2024-11-20 18:45:26.483035] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:04.749 [2024-11-20 18:45:26.483066] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:04.749 [2024-11-20 18:45:26.483073] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:04.749 [2024-11-20 18:45:26.483079] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:04.749 [2024-11-20 18:45:26.483084] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:04.749 [2024-11-20 18:45:26.483632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:04.749 [2024-11-20 18:45:26.631257] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:04.749 [2024-11-20 18:45:26.651451] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:04.749 malloc0 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:04.749 { 00:08:04.749 "params": { 00:08:04.749 "name": "Nvme$subsystem", 00:08:04.749 "trtype": "$TEST_TRANSPORT", 00:08:04.749 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:04.749 "adrfam": "ipv4", 00:08:04.749 "trsvcid": "$NVMF_PORT", 00:08:04.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:04.749 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:04.749 "hdgst": ${hdgst:-false}, 00:08:04.749 "ddgst": ${ddgst:-false} 00:08:04.749 }, 00:08:04.749 "method": "bdev_nvme_attach_controller" 00:08:04.749 } 00:08:04.749 EOF 00:08:04.749 )") 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:04.749 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:04.749 "params": { 00:08:04.749 "name": "Nvme1", 00:08:04.749 "trtype": "tcp", 00:08:04.749 "traddr": "10.0.0.2", 00:08:04.749 "adrfam": "ipv4", 00:08:04.749 "trsvcid": "4420", 00:08:04.749 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:04.749 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:04.750 "hdgst": false, 00:08:04.750 "ddgst": false 00:08:04.750 }, 00:08:04.750 "method": "bdev_nvme_attach_controller" 00:08:04.750 }' 00:08:04.750 [2024-11-20 18:45:26.731403] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:08:04.750 [2024-11-20 18:45:26.731443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3518600 ] 00:08:04.750 [2024-11-20 18:45:26.805354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.750 [2024-11-20 18:45:26.845803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.008 Running I/O for 10 seconds... 
00:08:06.876 8739.00 IOPS, 68.27 MiB/s
[2024-11-20T17:45:30.575Z] 8787.00 IOPS, 68.65 MiB/s
[2024-11-20T17:45:31.510Z] 8779.00 IOPS, 68.59 MiB/s
[2024-11-20T17:45:32.445Z] 8762.25 IOPS, 68.46 MiB/s
[2024-11-20T17:45:33.379Z] 8774.20 IOPS, 68.55 MiB/s
[2024-11-20T17:45:34.313Z] 8785.17 IOPS, 68.63 MiB/s
[2024-11-20T17:45:35.321Z] 8788.29 IOPS, 68.66 MiB/s
[2024-11-20T17:45:36.337Z] 8796.38 IOPS, 68.72 MiB/s
[2024-11-20T17:45:37.272Z] 8798.11 IOPS, 68.74 MiB/s
[2024-11-20T17:45:37.272Z] 8805.00 IOPS, 68.79 MiB/s
00:08:14.947 Latency(us)
00:08:14.947 [2024-11-20T17:45:37.272Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:14.947 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:08:14.947 Verification LBA range: start 0x0 length 0x1000
00:08:14.947 Nvme1n1 : 10.01 8808.84 68.82 0.00 0.00 14489.59 1825.65 21720.50
00:08:14.947 [2024-11-20T17:45:37.272Z] ===================================================================================================================
00:08:14.947 [2024-11-20T17:45:37.272Z] Total : 8808.84 68.82 0.00 0.00 14489.59 1825.65 21720.50
00:08:15.205 18:45:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3520320
00:08:15.205 18:45:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:08:15.205 18:45:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:15.205 18:45:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:08:15.205 18:45:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:08:15.205 18:45:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:08:15.205 18:45:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:08:15.205 18:45:37
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:15.205 18:45:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:15.205 { 00:08:15.205 "params": { 00:08:15.205 "name": "Nvme$subsystem", 00:08:15.205 "trtype": "$TEST_TRANSPORT", 00:08:15.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:15.205 "adrfam": "ipv4", 00:08:15.205 "trsvcid": "$NVMF_PORT", 00:08:15.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:15.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:15.205 "hdgst": ${hdgst:-false}, 00:08:15.205 "ddgst": ${ddgst:-false} 00:08:15.205 }, 00:08:15.205 "method": "bdev_nvme_attach_controller" 00:08:15.205 } 00:08:15.205 EOF 00:08:15.205 )") 00:08:15.205 18:45:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:15.205 [2024-11-20 18:45:37.373612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.205 [2024-11-20 18:45:37.373654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.205 18:45:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:15.205 18:45:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:15.205 18:45:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:15.205 "params": { 00:08:15.206 "name": "Nvme1", 00:08:15.206 "trtype": "tcp", 00:08:15.206 "traddr": "10.0.0.2", 00:08:15.206 "adrfam": "ipv4", 00:08:15.206 "trsvcid": "4420", 00:08:15.206 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:15.206 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:15.206 "hdgst": false, 00:08:15.206 "ddgst": false 00:08:15.206 }, 00:08:15.206 "method": "bdev_nvme_attach_controller" 00:08:15.206 }' 00:08:15.206 [2024-11-20 18:45:37.385602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.206 [2024-11-20 18:45:37.385615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.206 [2024-11-20 18:45:37.397636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.206 [2024-11-20 18:45:37.397650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.206 [2024-11-20 18:45:37.409663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.206 [2024-11-20 18:45:37.409674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.206 [2024-11-20 18:45:37.410792] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:08:15.206 [2024-11-20 18:45:37.410833] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3520320 ] 00:08:15.206 [2024-11-20 18:45:37.421696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.206 [2024-11-20 18:45:37.421706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.206 [2024-11-20 18:45:37.433725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.206 [2024-11-20 18:45:37.433735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.206 [2024-11-20 18:45:37.445759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.206 [2024-11-20 18:45:37.445769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.206 [2024-11-20 18:45:37.457791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.206 [2024-11-20 18:45:37.457800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.206 [2024-11-20 18:45:37.469823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.206 [2024-11-20 18:45:37.469833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.206 [2024-11-20 18:45:37.481854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.206 [2024-11-20 18:45:37.481863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.206 [2024-11-20 18:45:37.485280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.206 [2024-11-20 18:45:37.493886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:15.206 [2024-11-20 18:45:37.493898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.206 [2024-11-20 18:45:37.505916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.206 [2024-11-20 18:45:37.505929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.206 [2024-11-20 18:45:37.517950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.206 [2024-11-20 18:45:37.517960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.464 [2024-11-20 18:45:37.529039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.464 [2024-11-20 18:45:37.530003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.464 [2024-11-20 18:45:37.530023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.464 [2024-11-20 18:45:37.542031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.464 [2024-11-20 18:45:37.542049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.464 [2024-11-20 18:45:37.554054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.464 [2024-11-20 18:45:37.554069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.464 [2024-11-20 18:45:37.566085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.464 [2024-11-20 18:45:37.566100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.464 [2024-11-20 18:45:37.578113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.464 [2024-11-20 18:45:37.578125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.464 [2024-11-20 18:45:37.590146] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.464 [2024-11-20 18:45:37.590160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.464 [2024-11-20 18:45:37.602178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.464 [2024-11-20 18:45:37.602190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.464 [2024-11-20 18:45:37.614228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.464 [2024-11-20 18:45:37.614246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.464 [2024-11-20 18:45:37.626254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.464 [2024-11-20 18:45:37.626270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.464 [2024-11-20 18:45:37.638280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.464 [2024-11-20 18:45:37.638293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.465 [2024-11-20 18:45:37.650317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.465 [2024-11-20 18:45:37.650330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.465 [2024-11-20 18:45:37.662349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.465 [2024-11-20 18:45:37.662361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.465 [2024-11-20 18:45:37.674377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.465 [2024-11-20 18:45:37.674387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.465 [2024-11-20 18:45:37.686408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:15.465 [2024-11-20 18:45:37.686417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.465 [2024-11-20 18:45:37.698448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.465 [2024-11-20 18:45:37.698461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.465 [2024-11-20 18:45:37.710475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.465 [2024-11-20 18:45:37.710484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.465 [2024-11-20 18:45:37.722514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.465 [2024-11-20 18:45:37.722523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.465 [2024-11-20 18:45:37.734547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.465 [2024-11-20 18:45:37.734557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.465 [2024-11-20 18:45:37.746583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.465 [2024-11-20 18:45:37.746595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.465 [2024-11-20 18:45:37.758635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.465 [2024-11-20 18:45:37.758653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.465 Running I/O for 5 seconds... 
00:08:15.465 [2024-11-20 18:45:37.770660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.465 [2024-11-20 18:45:37.770670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.465 [2024-11-20 18:45:37.785525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.465 [2024-11-20 18:45:37.785546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.723 [2024-11-20 18:45:37.799711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.723 [2024-11-20 18:45:37.799732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.723 [2024-11-20 18:45:37.813535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.723 [2024-11-20 18:45:37.813557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.723 [2024-11-20 18:45:37.827106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.723 [2024-11-20 18:45:37.827125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.723 [2024-11-20 18:45:37.841150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.723 [2024-11-20 18:45:37.841169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.723 [2024-11-20 18:45:37.854999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.723 [2024-11-20 18:45:37.855017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.723 [2024-11-20 18:45:37.869126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.723 [2024-11-20 18:45:37.869148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.723 [2024-11-20 18:45:37.883172] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.723 [2024-11-20 18:45:37.883196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.723 [2024-11-20 18:45:37.897363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.723 [2024-11-20 18:45:37.897382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.723 [2024-11-20 18:45:37.910831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.723 [2024-11-20 18:45:37.910851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.723 [2024-11-20 18:45:37.924762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.723 [2024-11-20 18:45:37.924781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.723 [2024-11-20 18:45:37.938635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.723 [2024-11-20 18:45:37.938653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.723 [2024-11-20 18:45:37.952905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.723 [2024-11-20 18:45:37.952924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.723 [2024-11-20 18:45:37.964015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.723 [2024-11-20 18:45:37.964032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.723 [2024-11-20 18:45:37.978565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.723 [2024-11-20 18:45:37.978583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.723 [2024-11-20 18:45:37.992709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:15.723 [2024-11-20 18:45:37.992729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.723 [2024-11-20 18:45:38.006457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.724 [2024-11-20 18:45:38.006477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.724 [2024-11-20 18:45:38.020247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.724 [2024-11-20 18:45:38.020265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.724 [2024-11-20 18:45:38.031067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.724 [2024-11-20 18:45:38.031084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.724 [2024-11-20 18:45:38.045137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.724 [2024-11-20 18:45:38.045157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.983 [2024-11-20 18:45:38.058873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.983 [2024-11-20 18:45:38.058892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.983 [2024-11-20 18:45:38.072684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.983 [2024-11-20 18:45:38.072703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.983 [2024-11-20 18:45:38.086643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.983 [2024-11-20 18:45:38.086661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.983 [2024-11-20 18:45:38.100705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.983 
[2024-11-20 18:45:38.100723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.983 [2024-11-20 18:45:38.114566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.983 [2024-11-20 18:45:38.114584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.983 [2024-11-20 18:45:38.128226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.983 [2024-11-20 18:45:38.128244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.983 [2024-11-20 18:45:38.141995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.983 [2024-11-20 18:45:38.142018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.983 [2024-11-20 18:45:38.155651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.983 [2024-11-20 18:45:38.155669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.983 [2024-11-20 18:45:38.169531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.983 [2024-11-20 18:45:38.169548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.983 [2024-11-20 18:45:38.183279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.983 [2024-11-20 18:45:38.183296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.983 [2024-11-20 18:45:38.197153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.983 [2024-11-20 18:45:38.197171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.983 [2024-11-20 18:45:38.211172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.983 [2024-11-20 18:45:38.211190] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.983 [2024-11-20 18:45:38.224733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.983 [2024-11-20 18:45:38.224752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.983 [2024-11-20 18:45:38.238426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.983 [2024-11-20 18:45:38.238445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.983 [2024-11-20 18:45:38.252374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.983 [2024-11-20 18:45:38.252392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.983 [2024-11-20 18:45:38.265767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.983 [2024-11-20 18:45:38.265785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.983 [2024-11-20 18:45:38.279286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.983 [2024-11-20 18:45:38.279304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.983 [2024-11-20 18:45:38.293060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.983 [2024-11-20 18:45:38.293078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.242 [2024-11-20 18:45:38.306738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.242 [2024-11-20 18:45:38.306758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.242 [2024-11-20 18:45:38.320258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.242 [2024-11-20 18:45:38.320277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:16.242 [2024-11-20 18:45:38.334341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.242 [2024-11-20 18:45:38.334360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.242 [2024-11-20 18:45:38.348288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.242 [2024-11-20 18:45:38.348307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.242 [2024-11-20 18:45:38.361970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.242 [2024-11-20 18:45:38.361988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.242 [2024-11-20 18:45:38.375856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.242 [2024-11-20 18:45:38.375875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.242 [2024-11-20 18:45:38.389454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.242 [2024-11-20 18:45:38.389472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.242 [2024-11-20 18:45:38.403362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.242 [2024-11-20 18:45:38.403385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.242 [2024-11-20 18:45:38.417208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.242 [2024-11-20 18:45:38.417226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.242 [2024-11-20 18:45:38.431361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.242 [2024-11-20 18:45:38.431379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.242 [2024-11-20 18:45:38.442677] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.242 [2024-11-20 18:45:38.442695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.242 [2024-11-20 18:45:38.456993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.242 [2024-11-20 18:45:38.457011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.242 [2024-11-20 18:45:38.470365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.242 [2024-11-20 18:45:38.470383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.242 [2024-11-20 18:45:38.484330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.242 [2024-11-20 18:45:38.484348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.242 [2024-11-20 18:45:38.498164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.242 [2024-11-20 18:45:38.498182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.243 [2024-11-20 18:45:38.511676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.243 [2024-11-20 18:45:38.511694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.243 [2024-11-20 18:45:38.525918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.243 [2024-11-20 18:45:38.525937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.243 [2024-11-20 18:45:38.537131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.243 [2024-11-20 18:45:38.537149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.243 [2024-11-20 18:45:38.551815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:16.243 [2024-11-20 18:45:38.551833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.243 [2024-11-20 18:45:38.561563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.243 [2024-11-20 18:45:38.561583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.501 [2024-11-20 18:45:38.575772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.501 [2024-11-20 18:45:38.575793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.501 [2024-11-20 18:45:38.589496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.501 [2024-11-20 18:45:38.589517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.501 [2024-11-20 18:45:38.603420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.501 [2024-11-20 18:45:38.603441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.501 [2024-11-20 18:45:38.617505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.501 [2024-11-20 18:45:38.617525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.501 [2024-11-20 18:45:38.631427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.501 [2024-11-20 18:45:38.631446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.501 [2024-11-20 18:45:38.644968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.501 [2024-11-20 18:45:38.644988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.501 [2024-11-20 18:45:38.659240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.501 
[2024-11-20 18:45:38.659266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.501 [2024-11-20 18:45:38.672865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.501 [2024-11-20 18:45:38.672885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.501 [2024-11-20 18:45:38.686679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.501 [2024-11-20 18:45:38.686699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.501 [2024-11-20 18:45:38.696329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.501 [2024-11-20 18:45:38.696348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.501 [2024-11-20 18:45:38.710185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.501 [2024-11-20 18:45:38.710209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.501 [2024-11-20 18:45:38.723868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.501 [2024-11-20 18:45:38.723887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.501 [2024-11-20 18:45:38.737680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.501 [2024-11-20 18:45:38.737704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.501 [2024-11-20 18:45:38.751805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.501 [2024-11-20 18:45:38.751824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.501 [2024-11-20 18:45:38.765330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.501 [2024-11-20 18:45:38.765349] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.501 16763.00 IOPS, 130.96 MiB/s [2024-11-20T17:45:38.827Z] [2024-11-20 18:45:38.779149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.502 [2024-11-20 18:45:38.779168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.502 [2024-11-20 18:45:38.792914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.502 [2024-11-20 18:45:38.792933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.502 [2024-11-20 18:45:38.806826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.502 [2024-11-20 18:45:38.806847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.502 [2024-11-20 18:45:38.820514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.502 [2024-11-20 18:45:38.820534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.758 [2024-11-20 18:45:38.835043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.758 [2024-11-20 18:45:38.835064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.758 [2024-11-20 18:45:38.845837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.758 [2024-11-20 18:45:38.845855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.758 [2024-11-20 18:45:38.860797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.758 [2024-11-20 18:45:38.860815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.758 [2024-11-20 18:45:38.876484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.758 [2024-11-20 18:45:38.876503] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.758 [2024-11-20 18:45:38.890903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.758 [2024-11-20 18:45:38.890922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.759 [2024-11-20 18:45:38.905217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.759 [2024-11-20 18:45:38.905236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.759 [2024-11-20 18:45:38.916344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.759 [2024-11-20 18:45:38.916362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.759 [2024-11-20 18:45:38.930423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.759 [2024-11-20 18:45:38.930442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.759 [2024-11-20 18:45:38.943758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.759 [2024-11-20 18:45:38.943776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.759 [2024-11-20 18:45:38.957838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.759 [2024-11-20 18:45:38.957858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.759 [2024-11-20 18:45:38.971509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.759 [2024-11-20 18:45:38.971528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.759 [2024-11-20 18:45:38.985709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.759 [2024-11-20 18:45:38.985727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:16.759 [2024-11-20 18:45:38.999052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.759 [2024-11-20 18:45:38.999071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.759 [2024-11-20 18:45:39.013183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.759 [2024-11-20 18:45:39.013208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.759 [2024-11-20 18:45:39.026751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.759 [2024-11-20 18:45:39.026769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.759 [2024-11-20 18:45:39.040489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.759 [2024-11-20 18:45:39.040507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.759 [2024-11-20 18:45:39.054419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.759 [2024-11-20 18:45:39.054437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.759 [2024-11-20 18:45:39.068381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.759 [2024-11-20 18:45:39.068399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.017 [2024-11-20 18:45:39.082088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.017 [2024-11-20 18:45:39.082108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.017 [2024-11-20 18:45:39.095873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.017 [2024-11-20 18:45:39.095893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.017 [2024-11-20 18:45:39.109463] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.017 [2024-11-20 18:45:39.109482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.017 [2024-11-20 18:45:39.123527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.017 [2024-11-20 18:45:39.123546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.017 [2024-11-20 18:45:39.136940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.017 [2024-11-20 18:45:39.136959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.017 [2024-11-20 18:45:39.151220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.017 [2024-11-20 18:45:39.151244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.017 [2024-11-20 18:45:39.162119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.017 [2024-11-20 18:45:39.162136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.017 [2024-11-20 18:45:39.176290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.017 [2024-11-20 18:45:39.176309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.017 [2024-11-20 18:45:39.189890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.017 [2024-11-20 18:45:39.189909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.017 [2024-11-20 18:45:39.204062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.017 [2024-11-20 18:45:39.204080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.017 [2024-11-20 18:45:39.218467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:17.017 [2024-11-20 18:45:39.218485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.017 [2024-11-20 18:45:39.232054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.017 [2024-11-20 18:45:39.232072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.018 [2024-11-20 18:45:39.245969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.018 [2024-11-20 18:45:39.245987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.018 [2024-11-20 18:45:39.259574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.018 [2024-11-20 18:45:39.259592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.018 [2024-11-20 18:45:39.273427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.018 [2024-11-20 18:45:39.273446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.018 [2024-11-20 18:45:39.287368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.018 [2024-11-20 18:45:39.287386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.018 [2024-11-20 18:45:39.300866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.018 [2024-11-20 18:45:39.300884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.018 [2024-11-20 18:45:39.314356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.018 [2024-11-20 18:45:39.314374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.018 [2024-11-20 18:45:39.328375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.018 
[2024-11-20 18:45:39.328393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.276 [2024-11-20 18:45:39.342330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.276 [2024-11-20 18:45:39.342349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.276 [2024-11-20 18:45:39.356514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.276 [2024-11-20 18:45:39.356533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.276 [2024-11-20 18:45:39.370174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.276 [2024-11-20 18:45:39.370192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.276 [2024-11-20 18:45:39.383901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.276 [2024-11-20 18:45:39.383919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.276 [2024-11-20 18:45:39.397628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.276 [2024-11-20 18:45:39.397646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.276 [2024-11-20 18:45:39.411544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.276 [2024-11-20 18:45:39.411563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.276 [2024-11-20 18:45:39.425558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.276 [2024-11-20 18:45:39.425576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.276 [2024-11-20 18:45:39.436246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.276 [2024-11-20 18:45:39.436263] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.276 [2024-11-20 18:45:39.450127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.276 [2024-11-20 18:45:39.450145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.276 [2024-11-20 18:45:39.463251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.276 [2024-11-20 18:45:39.463269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.276 [2024-11-20 18:45:39.476999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.276 [2024-11-20 18:45:39.477017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.276 [2024-11-20 18:45:39.490903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.276 [2024-11-20 18:45:39.490921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.276 [2024-11-20 18:45:39.504897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.276 [2024-11-20 18:45:39.504915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.276 [2024-11-20 18:45:39.516142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.276 [2024-11-20 18:45:39.516159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.276 [2024-11-20 18:45:39.530395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.276 [2024-11-20 18:45:39.530413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.276 [2024-11-20 18:45:39.543890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.276 [2024-11-20 18:45:39.543908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:17.276 [2024-11-20 18:45:39.553294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.276 [2024-11-20 18:45:39.553311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.276 [2024-11-20 18:45:39.567081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.276 [2024-11-20 18:45:39.567099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.276 [2024-11-20 18:45:39.580717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.276 [2024-11-20 18:45:39.580735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.276 [2024-11-20 18:45:39.594886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.276 [2024-11-20 18:45:39.594907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.535 [2024-11-20 18:45:39.610290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.535 [2024-11-20 18:45:39.610310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.535 [2024-11-20 18:45:39.624712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.535 [2024-11-20 18:45:39.624730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.535 [2024-11-20 18:45:39.639561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.535 [2024-11-20 18:45:39.639580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.535 [2024-11-20 18:45:39.654029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.535 [2024-11-20 18:45:39.654048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.535 [2024-11-20 18:45:39.667125] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.535 [2024-11-20 18:45:39.667143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.535 [2024-11-20 18:45:39.680886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.535 [2024-11-20 18:45:39.680909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.535 [2024-11-20 18:45:39.694473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.535 [2024-11-20 18:45:39.694491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.535 [2024-11-20 18:45:39.708453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.535 [2024-11-20 18:45:39.708471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.535 [2024-11-20 18:45:39.722312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.535 [2024-11-20 18:45:39.722331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.535 [2024-11-20 18:45:39.735939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.535 [2024-11-20 18:45:39.735957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.535 [2024-11-20 18:45:39.749929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.535 [2024-11-20 18:45:39.749947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.535 [2024-11-20 18:45:39.763506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.535 [2024-11-20 18:45:39.763523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.535 16836.50 IOPS, 131.54 MiB/s [2024-11-20T17:45:39.860Z] [2024-11-20 18:45:39.777240] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.535 [2024-11-20 18:45:39.777259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.535 [2024-11-20 18:45:39.790952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.535 [2024-11-20 18:45:39.790970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.535 [2024-11-20 18:45:39.804667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.535 [2024-11-20 18:45:39.804685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.535 [2024-11-20 18:45:39.818744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.535 [2024-11-20 18:45:39.818762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.535 [2024-11-20 18:45:39.832534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.535 [2024-11-20 18:45:39.832553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.535 [2024-11-20 18:45:39.846606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.535 [2024-11-20 18:45:39.846624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.794 [2024-11-20 18:45:39.860821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.794 [2024-11-20 18:45:39.860841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.794 [2024-11-20 18:45:39.871096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.794 [2024-11-20 18:45:39.871114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.794 [2024-11-20 18:45:39.885014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:17.794 [2024-11-20 18:45:39.885033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.794 [2024-11-20 18:45:39.898924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.794 [2024-11-20 18:45:39.898942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.794 [2024-11-20 18:45:39.912970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.794 [2024-11-20 18:45:39.912988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.794 [2024-11-20 18:45:39.926893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.794 [2024-11-20 18:45:39.926912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.794 [2024-11-20 18:45:39.940362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.794 [2024-11-20 18:45:39.940385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.794 [2024-11-20 18:45:39.954390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.794 [2024-11-20 18:45:39.954410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.794 [2024-11-20 18:45:39.968376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.794 [2024-11-20 18:45:39.968397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.794 [2024-11-20 18:45:39.981906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.794 [2024-11-20 18:45:39.981925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.794 [2024-11-20 18:45:39.995746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.794 
[2024-11-20 18:45:39.995766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.794 [2024-11-20 18:45:40.009715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.794 [2024-11-20 18:45:40.009734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.794 [2024-11-20 18:45:40.023124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.794 [2024-11-20 18:45:40.023144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.794 [2024-11-20 18:45:40.037534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.794 [2024-11-20 18:45:40.037554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.794 [2024-11-20 18:45:40.045076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.794 [2024-11-20 18:45:40.045095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.794 [2024-11-20 18:45:40.058742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.794 [2024-11-20 18:45:40.058761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.794 [2024-11-20 18:45:40.067759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.794 [2024-11-20 18:45:40.067778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.794 [2024-11-20 18:45:40.082231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.794 [2024-11-20 18:45:40.082249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.794 [2024-11-20 18:45:40.096512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.794 [2024-11-20 18:45:40.096532] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.794 [2024-11-20 18:45:40.105372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.794 [2024-11-20 18:45:40.105391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.794 [2024-11-20 18:45:40.114078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.794 [2024-11-20 18:45:40.114097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.052 [2024-11-20 18:45:40.122647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.052 [2024-11-20 18:45:40.122667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.052 [2024-11-20 18:45:40.131961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.052 [2024-11-20 18:45:40.131980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.052 [2024-11-20 18:45:40.146466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.052 [2024-11-20 18:45:40.146486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.052 [2024-11-20 18:45:40.160096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.052 [2024-11-20 18:45:40.160117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.052 [2024-11-20 18:45:40.174317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.052 [2024-11-20 18:45:40.174341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.052 [2024-11-20 18:45:40.188130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.052 [2024-11-20 18:45:40.188150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:18.052 [2024-11-20 18:45:40.201919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.052 [2024-11-20 18:45:40.201938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... identical "Requested NSID 1 already in use" / "Unable to add namespace" error pair repeats with advancing timestamps; repeats elided ...]
00:08:18.569 16828.67 IOPS, 131.47 MiB/s [2024-11-20T17:45:40.894Z]
[... error pair repeats elided ...]
00:08:19.605 16837.00 IOPS, 131.54 MiB/s [2024-11-20T17:45:41.930Z]
[... error pair repeats elided ...]
00:08:20.123 [2024-11-20 18:45:42.415943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.123
[2024-11-20 18:45:42.415961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.123 [2024-11-20 18:45:42.429605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.123 [2024-11-20 18:45:42.429623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.123 [2024-11-20 18:45:42.443661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.123 [2024-11-20 18:45:42.443685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.381 [2024-11-20 18:45:42.457336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.381 [2024-11-20 18:45:42.457355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.381 [2024-11-20 18:45:42.471481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.381 [2024-11-20 18:45:42.471500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.381 [2024-11-20 18:45:42.485458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.381 [2024-11-20 18:45:42.485477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.381 [2024-11-20 18:45:42.499399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.381 [2024-11-20 18:45:42.499416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.381 [2024-11-20 18:45:42.513127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.381 [2024-11-20 18:45:42.513145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.381 [2024-11-20 18:45:42.526809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.381 [2024-11-20 18:45:42.526827] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.381 [2024-11-20 18:45:42.540599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.381 [2024-11-20 18:45:42.540618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.381 [2024-11-20 18:45:42.554618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.381 [2024-11-20 18:45:42.554636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.381 [2024-11-20 18:45:42.568878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.381 [2024-11-20 18:45:42.568896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.381 [2024-11-20 18:45:42.582973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.381 [2024-11-20 18:45:42.582991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.381 [2024-11-20 18:45:42.596908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.381 [2024-11-20 18:45:42.596926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.381 [2024-11-20 18:45:42.610967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.381 [2024-11-20 18:45:42.610985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.381 [2024-11-20 18:45:42.624878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.381 [2024-11-20 18:45:42.624911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.381 [2024-11-20 18:45:42.638813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.381 [2024-11-20 18:45:42.638831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:20.381 [2024-11-20 18:45:42.652720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.381 [2024-11-20 18:45:42.652738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.381 [2024-11-20 18:45:42.666376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.381 [2024-11-20 18:45:42.666394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.381 [2024-11-20 18:45:42.679932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.381 [2024-11-20 18:45:42.679950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.381 [2024-11-20 18:45:42.693552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.381 [2024-11-20 18:45:42.693570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.639 [2024-11-20 18:45:42.707671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.640 [2024-11-20 18:45:42.707692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.640 [2024-11-20 18:45:42.718957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.640 [2024-11-20 18:45:42.718976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.640 [2024-11-20 18:45:42.732938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.640 [2024-11-20 18:45:42.732964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.640 [2024-11-20 18:45:42.746349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.640 [2024-11-20 18:45:42.746370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.640 [2024-11-20 18:45:42.760215] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.640 [2024-11-20 18:45:42.760235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.640 [2024-11-20 18:45:42.774057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.640 [2024-11-20 18:45:42.774076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.640 16858.20 IOPS, 131.70 MiB/s [2024-11-20T17:45:42.965Z] [2024-11-20 18:45:42.785468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.640 [2024-11-20 18:45:42.785487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.640 00:08:20.640 Latency(us) 00:08:20.640 [2024-11-20T17:45:42.965Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.640 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:20.640 Nvme1n1 : 5.01 16858.75 131.71 0.00 0.00 7584.89 3495.25 17850.76 00:08:20.640 [2024-11-20T17:45:42.965Z] =================================================================================================================== 00:08:20.640 [2024-11-20T17:45:42.965Z] Total : 16858.75 131.71 0.00 0.00 7584.89 3495.25 17850.76 00:08:20.640 [2024-11-20 18:45:42.796232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.640 [2024-11-20 18:45:42.796247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.640 [2024-11-20 18:45:42.808260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.640 [2024-11-20 18:45:42.808273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.640 [2024-11-20 18:45:42.820297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.640 [2024-11-20 18:45:42.820318] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.640 [2024-11-20 18:45:42.832319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.640 [2024-11-20 18:45:42.832334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.640 [2024-11-20 18:45:42.844352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.640 [2024-11-20 18:45:42.844366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.640 [2024-11-20 18:45:42.856382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.640 [2024-11-20 18:45:42.856396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.640 [2024-11-20 18:45:42.868414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.640 [2024-11-20 18:45:42.868428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.640 [2024-11-20 18:45:42.880446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.640 [2024-11-20 18:45:42.880460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.640 [2024-11-20 18:45:42.892488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.640 [2024-11-20 18:45:42.892502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.640 [2024-11-20 18:45:42.904514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.640 [2024-11-20 18:45:42.904524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.640 [2024-11-20 18:45:42.916552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.640 [2024-11-20 18:45:42.916564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:20.640 [2024-11-20 18:45:42.928582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.640 [2024-11-20 18:45:42.928603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.640 [2024-11-20 18:45:42.940617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.640 [2024-11-20 18:45:42.940628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.640 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3520320) - No such process 00:08:20.640 18:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3520320 00:08:20.640 18:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.640 18:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.640 18:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:20.640 18:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.640 18:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:20.640 18:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.640 18:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:20.898 delay0 00:08:20.898 18:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.898 18:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:20.898 18:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:20.898 18:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:20.898 18:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.898 18:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:20.898 [2024-11-20 18:45:43.087060] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:27.457 [2024-11-20 18:45:49.183629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cfa070 is same with the state(6) to be set 00:08:27.457 Initializing NVMe Controllers 00:08:27.457 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:27.457 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:27.457 Initialization complete. Launching workers. 
00:08:27.457 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 109 00:08:27.457 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 391, failed to submit 38 00:08:27.457 success 214, unsuccessful 177, failed 0 00:08:27.457 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:27.457 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:27.457 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:27.457 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:27.457 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:27.457 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:27.457 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:27.457 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:27.457 rmmod nvme_tcp 00:08:27.457 rmmod nvme_fabrics 00:08:27.457 rmmod nvme_keyring 00:08:27.457 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:27.457 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:27.457 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:27.457 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3518493 ']' 00:08:27.457 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3518493 00:08:27.457 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3518493 ']' 00:08:27.457 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3518493 00:08:27.457 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@959 -- # uname 00:08:27.457 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:27.457 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3518493 00:08:27.457 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:27.457 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:27.457 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3518493' 00:08:27.457 killing process with pid 3518493 00:08:27.457 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3518493 00:08:27.457 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3518493 00:08:27.457 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:27.457 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:27.457 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:27.457 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:27.457 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:08:27.457 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:27.457 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:27.457 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:27.457 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:27.457 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:08:27.457 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:27.457 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.360 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:29.360 00:08:29.360 real 0m31.496s 00:08:29.360 user 0m42.110s 00:08:29.360 sys 0m11.036s 00:08:29.361 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.361 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:29.361 ************************************ 00:08:29.361 END TEST nvmf_zcopy 00:08:29.361 ************************************ 00:08:29.361 18:45:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:29.361 18:45:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:29.361 18:45:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.361 18:45:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:29.361 ************************************ 00:08:29.361 START TEST nvmf_nmic 00:08:29.361 ************************************ 00:08:29.361 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:29.621 * Looking for test storage... 
00:08:29.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:29.621 18:45:51 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:29.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.621 --rc genhtml_branch_coverage=1 00:08:29.621 --rc genhtml_function_coverage=1 00:08:29.621 --rc genhtml_legend=1 00:08:29.621 --rc geninfo_all_blocks=1 00:08:29.621 --rc geninfo_unexecuted_blocks=1 
00:08:29.621 00:08:29.621 ' 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:29.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.621 --rc genhtml_branch_coverage=1 00:08:29.621 --rc genhtml_function_coverage=1 00:08:29.621 --rc genhtml_legend=1 00:08:29.621 --rc geninfo_all_blocks=1 00:08:29.621 --rc geninfo_unexecuted_blocks=1 00:08:29.621 00:08:29.621 ' 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:29.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.621 --rc genhtml_branch_coverage=1 00:08:29.621 --rc genhtml_function_coverage=1 00:08:29.621 --rc genhtml_legend=1 00:08:29.621 --rc geninfo_all_blocks=1 00:08:29.621 --rc geninfo_unexecuted_blocks=1 00:08:29.621 00:08:29.621 ' 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:29.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.621 --rc genhtml_branch_coverage=1 00:08:29.621 --rc genhtml_function_coverage=1 00:08:29.621 --rc genhtml_legend=1 00:08:29.621 --rc geninfo_all_blocks=1 00:08:29.621 --rc geninfo_unexecuted_blocks=1 00:08:29.621 00:08:29.621 ' 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.621 18:45:51 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:29.621 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.622 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:29.622 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:29.622 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:29.622 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.622 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.622 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.622 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:29.622 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:29.622 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:29.622 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:29.622 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:29.622 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:29.622 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:29.622 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:29.622 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:29.622 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.622 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:29.622 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:29.622 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:29.622 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.622 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.622 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.622 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:29.622 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:29.622 
18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:29.622 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:36.192 18:45:57 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:36.192 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:36.192 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.192 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:36.192 Found net devices under 0000:86:00.0: cvl_0_0 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:36.193 Found net devices under 0000:86:00.1: cvl_0_1 00:08:36.193 
18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:36.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:36.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.460 ms 00:08:36.193 00:08:36.193 --- 10.0.0.2 ping statistics --- 00:08:36.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.193 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:36.193 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:36.193 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:08:36.193 00:08:36.193 --- 10.0.0.1 ping statistics --- 00:08:36.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.193 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3525816 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
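The nvmf_tcp_init sequence traced above (flush addresses, create a namespace, move the target NIC into it, assign 10.0.0.1/10.0.0.2, open TCP port 4420, ping both directions) can be summarized as a standalone sketch. Interface names cvl_0_0/cvl_0_1, the namespace name, and the addresses are taken from this log; running it requires root on a host with those interfaces, so this is illustrative, not a drop-in script.

```shell
#!/usr/bin/env bash
# Sketch of SPDK's nvmf_tcp_init (test/nvmf/common.sh) as traced in this log:
# isolate the target NIC in a network namespace and verify reachability.
set -euo pipefail

TARGET_IF=cvl_0_0          # NIC handed to the SPDK target (from the log)
INITIATOR_IF=cvl_0_1       # NIC kept in the root namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP listener port on the initiator-side interface,
# as common.sh@287 ("ipts") does with a tagged iptables rule.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                      # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1  # target ns -> root ns
```

The target application is then launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...`), which is why the log prepends `NVMF_TARGET_NS_CMD` to `NVMF_APP`.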
00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3525816 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3525816 ']' 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:36.193 18:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:36.193 [2024-11-20 18:45:57.833734] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:08:36.193 [2024-11-20 18:45:57.833777] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.193 [2024-11-20 18:45:57.912840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:36.193 [2024-11-20 18:45:57.953344] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:36.193 [2024-11-20 18:45:57.953383] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:36.193 [2024-11-20 18:45:57.953390] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:36.193 [2024-11-20 18:45:57.953396] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:36.193 [2024-11-20 18:45:57.953400] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:36.193 [2024-11-20 18:45:57.954885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.193 [2024-11-20 18:45:57.954993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:36.193 [2024-11-20 18:45:57.955075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.193 [2024-11-20 18:45:57.955076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:36.451 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:36.451 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:36.451 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:36.451 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:36.451 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:36.451 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:36.451 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:36.451 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.451 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:36.451 [2024-11-20 18:45:58.702982] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.451 
18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.451 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:36.451 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.451 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:36.451 Malloc0 00:08:36.451 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.451 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:36.451 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.451 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:36.451 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.451 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:36.451 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.451 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:36.451 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.451 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:36.451 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.451 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:36.451 [2024-11-20 18:45:58.773108] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:36.708 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.708 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:36.708 test case1: single bdev can't be used in multiple subsystems 00:08:36.708 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:36.708 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.708 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:36.709 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.709 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:36.709 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.709 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:36.709 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.709 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:36.709 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:36.709 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.709 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:36.709 [2024-11-20 18:45:58.796996] bdev.c:8467:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:36.709 [2024-11-20 
18:45:58.797015] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:36.709 [2024-11-20 18:45:58.797022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.709 request: 00:08:36.709 { 00:08:36.709 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:36.709 "namespace": { 00:08:36.709 "bdev_name": "Malloc0", 00:08:36.709 "no_auto_visible": false 00:08:36.709 }, 00:08:36.709 "method": "nvmf_subsystem_add_ns", 00:08:36.709 "req_id": 1 00:08:36.709 } 00:08:36.709 Got JSON-RPC error response 00:08:36.709 response: 00:08:36.709 { 00:08:36.709 "code": -32602, 00:08:36.709 "message": "Invalid parameters" 00:08:36.709 } 00:08:36.709 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:36.709 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:36.709 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:36.709 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:36.709 Adding namespace failed - expected result. 
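Test case1 above deliberately provokes the `-32602 Invalid parameters` JSON-RPC error: a malloc bdev already claimed (type `exclusive_write`) by one subsystem cannot be added as a namespace of a second one. A minimal reproduction sketch using SPDK's `scripts/rpc.py` against an already-running nvmf_tgt (default socket `/var/tmp/spdk.sock`, RPC names taken from the log) would be:

```shell
# Reproduction sketch of test case1; assumes nvmf_tgt is already running and
# scripts/rpc.py is on the path. All RPC names appear in the trace above.
rpc=./scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0  # first claim succeeds

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
# Malloc0 is already claimed by cnode1, so this add fails with the
# JSON-RPC -32602 "Invalid parameters" response shown in the log.
if ! $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo ' Adding namespace failed - expected result.'
fi
```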
00:08:36.709 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:36.709 test case2: host connect to nvmf target in multiple paths 00:08:36.709 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:36.709 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.709 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:36.709 [2024-11-20 18:45:58.809111] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:36.709 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.709 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:37.652 18:45:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:39.022 18:46:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:39.022 18:46:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:39.022 18:46:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:39.022 18:46:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:39.022 18:46:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
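Test case2 connects the host to the same subsystem through two listeners (ports 4420 and 4421) to exercise multipath, then waits for the serial to appear. A hedged sketch of those steps, with the HOSTNQN/HOSTID values copied from this run (they are machine-specific), requires root and a reachable target:

```shell
# Sketch of test case2 (nmic.sh@41-44): two nvme-cli connects to the same
# NQN over different ports, then poll for the device by serial number.
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562

nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421

# waitforserial: loop until lsblk reports a device with the subsystem serial.
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do
    sleep 2
done
```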
00:08:40.917 18:46:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:40.917 18:46:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:40.917 18:46:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:40.917 18:46:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:40.917 18:46:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:40.917 18:46:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:08:40.917 18:46:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:40.917 [global] 00:08:40.917 thread=1 00:08:40.917 invalidate=1 00:08:40.917 rw=write 00:08:40.917 time_based=1 00:08:40.917 runtime=1 00:08:40.917 ioengine=libaio 00:08:40.917 direct=1 00:08:40.917 bs=4096 00:08:40.917 iodepth=1 00:08:40.917 norandommap=0 00:08:40.917 numjobs=1 00:08:40.917 00:08:40.917 verify_dump=1 00:08:40.917 verify_backlog=512 00:08:40.917 verify_state_save=0 00:08:40.917 do_verify=1 00:08:40.917 verify=crc32c-intel 00:08:40.917 [job0] 00:08:40.917 filename=/dev/nvme0n1 00:08:40.917 Could not set queue depth (nvme0n1) 00:08:41.175 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:41.175 fio-3.35 00:08:41.175 Starting 1 thread 00:08:42.548 00:08:42.548 job0: (groupid=0, jobs=1): err= 0: pid=3526901: Wed Nov 20 18:46:04 2024 00:08:42.548 read: IOPS=21, BW=87.6KiB/s (89.7kB/s)(88.0KiB/1005msec) 00:08:42.548 slat (nsec): min=10127, max=27455, avg=22934.00, stdev=3078.83 00:08:42.548 clat (usec): min=40722, max=42014, avg=41147.27, stdev=399.55 00:08:42.548 lat (usec): min=40746, max=42038, 
avg=41170.20, stdev=398.30 00:08:42.548 clat percentiles (usec): 00:08:42.548 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:08:42.548 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:42.548 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:08:42.548 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:42.548 | 99.99th=[42206] 00:08:42.549 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:08:42.549 slat (usec): min=9, max=26612, avg=62.74, stdev=1175.63 00:08:42.549 clat (usec): min=112, max=338, avg=127.65, stdev=16.87 00:08:42.549 lat (usec): min=123, max=26931, avg=190.39, stdev=1184.21 00:08:42.549 clat percentiles (usec): 00:08:42.549 | 1.00th=[ 115], 5.00th=[ 117], 10.00th=[ 119], 20.00th=[ 120], 00:08:42.549 | 30.00th=[ 122], 40.00th=[ 123], 50.00th=[ 124], 60.00th=[ 126], 00:08:42.549 | 70.00th=[ 128], 80.00th=[ 131], 90.00th=[ 145], 95.00th=[ 153], 00:08:42.549 | 99.00th=[ 169], 99.50th=[ 194], 99.90th=[ 338], 99.95th=[ 338], 00:08:42.549 | 99.99th=[ 338] 00:08:42.549 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:08:42.549 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:42.549 lat (usec) : 250=95.51%, 500=0.37% 00:08:42.549 lat (msec) : 50=4.12% 00:08:42.549 cpu : usr=0.30%, sys=0.50%, ctx=538, majf=0, minf=1 00:08:42.549 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:42.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.549 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.549 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:42.549 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:42.549 00:08:42.549 Run status group 0 (all jobs): 00:08:42.549 READ: bw=87.6KiB/s (89.7kB/s), 87.6KiB/s-87.6KiB/s (89.7kB/s-89.7kB/s), io=88.0KiB (90.1kB), 
run=1005-1005msec 00:08:42.549 WRITE: bw=2038KiB/s (2087kB/s), 2038KiB/s-2038KiB/s (2087kB/s-2087kB/s), io=2048KiB (2097kB), run=1005-1005msec 00:08:42.549 00:08:42.549 Disk stats (read/write): 00:08:42.549 nvme0n1: ios=71/512, merge=0/0, ticks=1253/61, in_queue=1314, util=98.80% 00:08:42.549 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:42.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:42.549 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:42.549 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:08:42.549 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:42.549 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:42.549 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:42.549 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:42.549 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:08:42.549 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:42.549 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:42.549 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:42.549 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:42.549 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:42.549 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:42.549 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:08:42.549 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:42.549 rmmod nvme_tcp 00:08:42.549 rmmod nvme_fabrics 00:08:42.549 rmmod nvme_keyring 00:08:42.549 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:42.549 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:42.549 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:42.549 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3525816 ']' 00:08:42.549 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3525816 00:08:42.549 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3525816 ']' 00:08:42.549 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3525816 00:08:42.549 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:08:42.549 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:42.549 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3525816 00:08:42.549 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:42.549 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:42.549 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3525816' 00:08:42.549 killing process with pid 3525816 00:08:42.549 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3525816 00:08:42.549 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3525816 00:08:42.808 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:42.808 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:42.808 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:42.808 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:42.808 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:08:42.808 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:42.808 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:08:42.808 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:42.808 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:42.808 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.808 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:42.808 18:46:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.345 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:45.345 00:08:45.345 real 0m15.444s 00:08:45.345 user 0m35.416s 00:08:45.345 sys 0m5.247s 00:08:45.345 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.345 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:45.345 ************************************ 00:08:45.345 END TEST nvmf_nmic 00:08:45.345 ************************************ 00:08:45.345 18:46:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh 
--transport=tcp 00:08:45.345 18:46:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:45.345 18:46:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.345 18:46:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:45.345 ************************************ 00:08:45.345 START TEST nvmf_fio_target 00:08:45.345 ************************************ 00:08:45.345 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:45.345 * Looking for test storage... 00:08:45.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:45.345 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:45.345 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:08:45.345 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:45.345 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:45.345 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.345 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.345 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.345 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.345 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.345 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.345 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@337 -- # read -ra ver2 00:08:45.345 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.346 18:46:07 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:45.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.346 --rc genhtml_branch_coverage=1 00:08:45.346 --rc genhtml_function_coverage=1 00:08:45.346 --rc genhtml_legend=1 00:08:45.346 --rc geninfo_all_blocks=1 00:08:45.346 --rc geninfo_unexecuted_blocks=1 00:08:45.346 00:08:45.346 ' 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:45.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.346 --rc genhtml_branch_coverage=1 00:08:45.346 --rc genhtml_function_coverage=1 00:08:45.346 --rc genhtml_legend=1 00:08:45.346 --rc geninfo_all_blocks=1 00:08:45.346 --rc geninfo_unexecuted_blocks=1 00:08:45.346 00:08:45.346 ' 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:45.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.346 --rc genhtml_branch_coverage=1 00:08:45.346 --rc genhtml_function_coverage=1 00:08:45.346 --rc genhtml_legend=1 00:08:45.346 --rc geninfo_all_blocks=1 00:08:45.346 --rc geninfo_unexecuted_blocks=1 00:08:45.346 00:08:45.346 ' 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:45.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.346 --rc 
genhtml_branch_coverage=1 00:08:45.346 --rc genhtml_function_coverage=1 00:08:45.346 --rc genhtml_legend=1 00:08:45.346 --rc geninfo_all_blocks=1 00:08:45.346 --rc geninfo_unexecuted_blocks=1 00:08:45.346 00:08:45.346 ' 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:45.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.346 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:45.347 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:45.347 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:08:45.347 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:08:51.916 18:46:13 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:51.916 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:51.916 18:46:13 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:51.916 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:51.916 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:51.917 Found net devices under 0000:86:00.0: cvl_0_0 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:51.917 Found net devices under 0000:86:00.1: cvl_0_1 
00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:51.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:51.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.528 ms 00:08:51.917 00:08:51.917 --- 10.0.0.2 ping statistics --- 00:08:51.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.917 rtt min/avg/max/mdev = 0.528/0.528/0.528/0.000 ms 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:51.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:51.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:08:51.917 00:08:51.917 --- 10.0.0.1 ping statistics --- 00:08:51.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.917 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
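The nvmf_tcp_init trace above (common.sh@250 through @291) builds a two-endpoint TCP topology on one host by moving the target NIC into its own network namespace. A condensed sketch follows; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.x addresses are specific to this rig, and the script takes a skip path unless it is run as root on a machine that actually has those NICs.

```shell
#!/bin/sh
# Sketch of the target/initiator split the log builds: the target-side NIC
# (cvl_0_0) is moved into a fresh namespace so the SPDK target and the
# initiator talk over a real TCP path. Names/addresses are from this log.
if [ "$(id -u)" -ne 0 ] || ! ip link show cvl_0_0 >/dev/null 2>&1; then
    echo "skipping: needs root and this rig's cvl_0_0/cvl_0_1 NICs"
    exit 0
fi
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side, namespaced
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
ping -c 1 10.0.0.2                                   # initiator -> target
```

The two pings in the trace (host to 10.0.0.2, then from inside the namespace back to 10.0.0.1) verify the path in both directions before the target is started.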
00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3530668 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3530668 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3530668 ']' 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:51.917 [2024-11-20 18:46:13.383691] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:08:51.917 [2024-11-20 18:46:13.383738] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.917 [2024-11-20 18:46:13.460195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:51.917 [2024-11-20 18:46:13.502329] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.917 [2024-11-20 18:46:13.502368] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.917 [2024-11-20 18:46:13.502376] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:51.917 [2024-11-20 18:46:13.502382] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:51.917 [2024-11-20 18:46:13.502387] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:51.917 [2024-11-20 18:46:13.503862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.917 [2024-11-20 18:46:13.503972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:51.917 [2024-11-20 18:46:13.504078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.917 [2024-11-20 18:46:13.504079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:51.917 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:08:51.918 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:51.918 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:51.918 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:51.918 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.918 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:51.918 [2024-11-20 18:46:13.823001] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:51.918 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:51.918 18:46:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:51.918 18:46:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:52.175 18:46:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:52.175 18:46:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:52.175 18:46:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:52.175 18:46:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:52.432 18:46:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:52.432 18:46:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:52.688 18:46:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:52.944 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:52.944 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:53.201 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:53.201 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:53.459 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:53.459 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:08:53.459 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:53.716 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:53.716 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:53.972 18:46:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:53.972 18:46:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:54.230 18:46:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:54.230 [2024-11-20 18:46:16.530458] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.487 18:46:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:08:54.487 18:46:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:08:54.744 18:46:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
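The fio.sh@17 through @46 trace above provisions the target over JSON-RPC: a TCP transport, seven 64 MiB malloc bdevs, a raid0 and a concat set built from them, one subsystem carrying all of it, and a TCP listener the initiator then connects to. A condensed sketch of that call sequence follows; it assumes a running nvmf_tgt and an SPDK checkout at `$SPDK_ROOT` (both assumptions, not from the log), and skips itself when rpc.py is not found.

```shell
#!/bin/sh
# Condensed from the rpc.py calls traced above (target/fio.sh). Assumes a
# running nvmf_tgt reachable on the default RPC socket and an SPDK checkout
# at $SPDK_ROOT; both are assumptions for this sketch.
RPC="${SPDK_ROOT:-$HOME/spdk}/scripts/rpc.py"
if [ ! -e "$RPC" ]; then
    echo "skipping: rpc.py not found (set SPDK_ROOT to your SPDK checkout)"
    exit 0
fi
"$RPC" nvmf_create_transport -t tcp -o -u 8192
for i in 0 1 2 3 4 5 6; do                      # Malloc0 .. Malloc6
    "$RPC" bdev_malloc_create 64 512
done
"$RPC" bdev_raid_create -n raid0  -z 64 -r 0      -b 'Malloc2 Malloc3'
"$RPC" bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
done
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
```

The four namespaces added here are what later surface on the initiator as /dev/nvme0n1 through /dev/nvme0n4 in the fio job files.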
00:08:56.115 18:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:08:56.115 18:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:08:56.115 18:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:56.115 18:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:08:56.115 18:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:08:56.115 18:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:08:58.021 18:46:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:58.021 18:46:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:58.021 18:46:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:58.021 18:46:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:08:58.021 18:46:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:58.021 18:46:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:08:58.021 18:46:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:58.021 [global] 00:08:58.021 thread=1 00:08:58.021 invalidate=1 00:08:58.021 rw=write 00:08:58.021 time_based=1 00:08:58.021 runtime=1 00:08:58.021 ioengine=libaio 00:08:58.021 direct=1 00:08:58.021 bs=4096 00:08:58.021 iodepth=1 00:08:58.021 norandommap=0 00:08:58.021 numjobs=1 00:08:58.021 00:08:58.021 
verify_dump=1 00:08:58.021 verify_backlog=512 00:08:58.021 verify_state_save=0 00:08:58.021 do_verify=1 00:08:58.021 verify=crc32c-intel 00:08:58.021 [job0] 00:08:58.021 filename=/dev/nvme0n1 00:08:58.021 [job1] 00:08:58.021 filename=/dev/nvme0n2 00:08:58.021 [job2] 00:08:58.021 filename=/dev/nvme0n3 00:08:58.021 [job3] 00:08:58.021 filename=/dev/nvme0n4 00:08:58.021 Could not set queue depth (nvme0n1) 00:08:58.021 Could not set queue depth (nvme0n2) 00:08:58.021 Could not set queue depth (nvme0n3) 00:08:58.021 Could not set queue depth (nvme0n4) 00:08:58.279 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:58.279 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:58.279 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:58.279 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:58.279 fio-3.35 00:08:58.279 Starting 4 threads 00:08:59.653 00:08:59.654 job0: (groupid=0, jobs=1): err= 0: pid=3532052: Wed Nov 20 18:46:21 2024 00:08:59.654 read: IOPS=66, BW=264KiB/s (271kB/s)(268KiB/1014msec) 00:08:59.654 slat (nsec): min=7489, max=31462, avg=14679.60, stdev=6677.22 00:08:59.654 clat (usec): min=180, max=41120, avg=13466.45, stdev=19129.59 00:08:59.654 lat (usec): min=188, max=41127, avg=13481.13, stdev=19134.33 00:08:59.654 clat percentiles (usec): 00:08:59.654 | 1.00th=[ 182], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 198], 00:08:59.654 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 225], 00:08:59.654 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:08:59.654 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:59.654 | 99.99th=[41157] 00:08:59.654 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:08:59.654 slat (nsec): min=9232, max=66631, 
avg=10988.65, stdev=3892.88 00:08:59.654 clat (usec): min=127, max=431, avg=201.79, stdev=28.47 00:08:59.654 lat (usec): min=138, max=441, avg=212.78, stdev=27.78 00:08:59.654 clat percentiles (usec): 00:08:59.654 | 1.00th=[ 135], 5.00th=[ 149], 10.00th=[ 165], 20.00th=[ 186], 00:08:59.654 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 204], 60.00th=[ 208], 00:08:59.654 | 70.00th=[ 215], 80.00th=[ 221], 90.00th=[ 233], 95.00th=[ 243], 00:08:59.654 | 99.00th=[ 260], 99.50th=[ 273], 99.90th=[ 433], 99.95th=[ 433], 00:08:59.654 | 99.99th=[ 433] 00:08:59.654 bw ( KiB/s): min= 4096, max= 4096, per=23.07%, avg=4096.00, stdev= 0.00, samples=1 00:08:59.654 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:59.654 lat (usec) : 250=94.13%, 500=2.07% 00:08:59.654 lat (msec) : 50=3.80% 00:08:59.654 cpu : usr=0.39%, sys=0.49%, ctx=579, majf=0, minf=1 00:08:59.654 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:59.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.654 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.654 issued rwts: total=67,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:59.654 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:59.654 job1: (groupid=0, jobs=1): err= 0: pid=3532068: Wed Nov 20 18:46:21 2024 00:08:59.654 read: IOPS=21, BW=86.4KiB/s (88.5kB/s)(88.0KiB/1018msec) 00:08:59.654 slat (nsec): min=12281, max=25484, avg=23798.45, stdev=2659.51 00:08:59.654 clat (usec): min=40661, max=41948, avg=40997.29, stdev=223.94 00:08:59.654 lat (usec): min=40674, max=41971, avg=41021.08, stdev=224.64 00:08:59.654 clat percentiles (usec): 00:08:59.654 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:08:59.654 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:59.654 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:08:59.654 | 99.00th=[42206], 99.50th=[42206], 
99.90th=[42206], 99.95th=[42206], 00:08:59.654 | 99.99th=[42206] 00:08:59.654 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:08:59.654 slat (nsec): min=10327, max=52885, avg=12869.11, stdev=3058.60 00:08:59.654 clat (usec): min=155, max=847, avg=209.25, stdev=37.62 00:08:59.654 lat (usec): min=169, max=863, avg=222.11, stdev=38.12 00:08:59.654 clat percentiles (usec): 00:08:59.654 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 192], 00:08:59.654 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 208], 00:08:59.654 | 70.00th=[ 212], 80.00th=[ 221], 90.00th=[ 239], 95.00th=[ 260], 00:08:59.654 | 99.00th=[ 285], 99.50th=[ 355], 99.90th=[ 848], 99.95th=[ 848], 00:08:59.654 | 99.99th=[ 848] 00:08:59.654 bw ( KiB/s): min= 4096, max= 4096, per=23.07%, avg=4096.00, stdev= 0.00, samples=1 00:08:59.654 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:59.654 lat (usec) : 250=88.95%, 500=6.74%, 1000=0.19% 00:08:59.654 lat (msec) : 50=4.12% 00:08:59.654 cpu : usr=0.49%, sys=0.88%, ctx=534, majf=0, minf=1 00:08:59.654 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:59.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.654 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.654 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:59.654 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:59.654 job2: (groupid=0, jobs=1): err= 0: pid=3532087: Wed Nov 20 18:46:21 2024 00:08:59.654 read: IOPS=23, BW=92.5KiB/s (94.7kB/s)(96.0KiB/1038msec) 00:08:59.654 slat (nsec): min=9397, max=25238, avg=23051.08, stdev=3694.61 00:08:59.654 clat (usec): min=239, max=42095, avg=39314.41, stdev=8326.59 00:08:59.654 lat (usec): min=263, max=42118, avg=39337.46, stdev=8326.47 00:08:59.654 clat percentiles (usec): 00:08:59.654 | 1.00th=[ 241], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:08:59.654 | 
30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:59.654 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:08:59.654 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:59.654 | 99.99th=[42206] 00:08:59.654 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:08:59.654 slat (nsec): min=10163, max=56939, avg=13988.82, stdev=3599.18 00:08:59.654 clat (usec): min=133, max=305, avg=165.43, stdev=26.10 00:08:59.654 lat (usec): min=144, max=325, avg=179.42, stdev=27.59 00:08:59.654 clat percentiles (usec): 00:08:59.654 | 1.00th=[ 137], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:08:59.654 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 161], 00:08:59.654 | 70.00th=[ 172], 80.00th=[ 186], 90.00th=[ 204], 95.00th=[ 217], 00:08:59.654 | 99.00th=[ 245], 99.50th=[ 273], 99.90th=[ 306], 99.95th=[ 306], 00:08:59.654 | 99.99th=[ 306] 00:08:59.654 bw ( KiB/s): min= 4096, max= 4096, per=23.07%, avg=4096.00, stdev= 0.00, samples=1 00:08:59.654 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:59.654 lat (usec) : 250=94.78%, 500=0.93% 00:08:59.654 lat (msec) : 50=4.29% 00:08:59.654 cpu : usr=0.39%, sys=0.77%, ctx=537, majf=0, minf=1 00:08:59.654 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:59.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.654 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.654 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:59.654 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:59.654 job3: (groupid=0, jobs=1): err= 0: pid=3532093: Wed Nov 20 18:46:21 2024 00:08:59.654 read: IOPS=2821, BW=11.0MiB/s (11.6MB/s)(11.0MiB/1001msec) 00:08:59.654 slat (nsec): min=2855, max=26405, avg=6471.79, stdev=1592.91 00:08:59.654 clat (usec): min=147, max=271, avg=184.72, stdev=14.86 00:08:59.654 lat (usec): 
min=151, max=276, avg=191.19, stdev=15.25 00:08:59.654 clat percentiles (usec): 00:08:59.654 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 174], 00:08:59.654 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:08:59.654 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 202], 95.00th=[ 210], 00:08:59.654 | 99.00th=[ 239], 99.50th=[ 249], 99.90th=[ 265], 99.95th=[ 269], 00:08:59.654 | 99.99th=[ 273] 00:08:59.654 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:08:59.654 slat (usec): min=4, max=238, avg= 9.43, stdev= 5.14 00:08:59.654 clat (usec): min=104, max=365, avg=135.65, stdev=28.41 00:08:59.654 lat (usec): min=110, max=455, avg=145.09, stdev=30.11 00:08:59.654 clat percentiles (usec): 00:08:59.654 | 1.00th=[ 109], 5.00th=[ 112], 10.00th=[ 114], 20.00th=[ 117], 00:08:59.654 | 30.00th=[ 120], 40.00th=[ 122], 50.00th=[ 125], 60.00th=[ 128], 00:08:59.654 | 70.00th=[ 135], 80.00th=[ 149], 90.00th=[ 182], 95.00th=[ 202], 00:08:59.654 | 99.00th=[ 225], 99.50th=[ 231], 99.90th=[ 255], 99.95th=[ 306], 00:08:59.654 | 99.99th=[ 367] 00:08:59.654 bw ( KiB/s): min=12288, max=12288, per=69.20%, avg=12288.00, stdev= 0.00, samples=1 00:08:59.654 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:08:59.654 lat (usec) : 250=99.69%, 500=0.31% 00:08:59.654 cpu : usr=3.00%, sys=4.30%, ctx=5897, majf=0, minf=1 00:08:59.654 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:59.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.654 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.654 issued rwts: total=2824,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:59.655 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:59.655 00:08:59.655 Run status group 0 (all jobs): 00:08:59.655 READ: bw=11.1MiB/s (11.6MB/s), 86.4KiB/s-11.0MiB/s (88.5kB/s-11.6MB/s), io=11.5MiB (12.0MB), run=1001-1038msec 00:08:59.655 WRITE: 
bw=17.3MiB/s (18.2MB/s), 1973KiB/s-12.0MiB/s (2020kB/s-12.6MB/s), io=18.0MiB (18.9MB), run=1001-1038msec 00:08:59.655 00:08:59.655 Disk stats (read/write): 00:08:59.655 nvme0n1: ios=113/512, merge=0/0, ticks=765/96, in_queue=861, util=86.57% 00:08:59.655 nvme0n2: ios=22/512, merge=0/0, ticks=698/97, in_queue=795, util=86.89% 00:08:59.655 nvme0n3: ios=45/512, merge=0/0, ticks=1724/76, in_queue=1800, util=98.33% 00:08:59.655 nvme0n4: ios=2446/2560, merge=0/0, ticks=448/348, in_queue=796, util=89.71% 00:08:59.655 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:08:59.655 [global] 00:08:59.655 thread=1 00:08:59.655 invalidate=1 00:08:59.655 rw=randwrite 00:08:59.655 time_based=1 00:08:59.655 runtime=1 00:08:59.655 ioengine=libaio 00:08:59.655 direct=1 00:08:59.655 bs=4096 00:08:59.655 iodepth=1 00:08:59.655 norandommap=0 00:08:59.655 numjobs=1 00:08:59.655 00:08:59.655 verify_dump=1 00:08:59.655 verify_backlog=512 00:08:59.655 verify_state_save=0 00:08:59.655 do_verify=1 00:08:59.655 verify=crc32c-intel 00:08:59.655 [job0] 00:08:59.655 filename=/dev/nvme0n1 00:08:59.655 [job1] 00:08:59.655 filename=/dev/nvme0n2 00:08:59.655 [job2] 00:08:59.655 filename=/dev/nvme0n3 00:08:59.655 [job3] 00:08:59.655 filename=/dev/nvme0n4 00:08:59.655 Could not set queue depth (nvme0n1) 00:08:59.655 Could not set queue depth (nvme0n2) 00:08:59.655 Could not set queue depth (nvme0n3) 00:08:59.655 Could not set queue depth (nvme0n4) 00:08:59.912 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:59.913 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:59.913 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:59.913 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:59.913 fio-3.35 00:08:59.913 Starting 4 threads 00:09:01.315 00:09:01.315 job0: (groupid=0, jobs=1): err= 0: pid=3532550: Wed Nov 20 18:46:23 2024 00:09:01.315 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:01.315 slat (nsec): min=7644, max=35005, avg=8661.96, stdev=1101.13 00:09:01.315 clat (usec): min=160, max=408, avg=196.15, stdev=13.03 00:09:01.315 lat (usec): min=169, max=416, avg=204.81, stdev=13.06 00:09:01.315 clat percentiles (usec): 00:09:01.315 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 186], 00:09:01.315 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 196], 60.00th=[ 198], 00:09:01.315 | 70.00th=[ 202], 80.00th=[ 204], 90.00th=[ 212], 95.00th=[ 217], 00:09:01.315 | 99.00th=[ 239], 99.50th=[ 251], 99.90th=[ 265], 99.95th=[ 265], 00:09:01.315 | 99.99th=[ 408] 00:09:01.315 write: IOPS=2900, BW=11.3MiB/s (11.9MB/s)(11.3MiB/1001msec); 0 zone resets 00:09:01.315 slat (nsec): min=11256, max=40192, avg=12329.89, stdev=1247.80 00:09:01.315 clat (usec): min=113, max=279, avg=145.20, stdev=16.46 00:09:01.315 lat (usec): min=125, max=319, avg=157.53, stdev=16.63 00:09:01.315 clat percentiles (usec): 00:09:01.315 | 1.00th=[ 119], 5.00th=[ 124], 10.00th=[ 128], 20.00th=[ 135], 00:09:01.315 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 145], 60.00th=[ 147], 00:09:01.315 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 161], 95.00th=[ 167], 00:09:01.315 | 99.00th=[ 237], 99.50th=[ 243], 99.90th=[ 258], 99.95th=[ 265], 00:09:01.315 | 99.99th=[ 281] 00:09:01.315 bw ( KiB/s): min=12288, max=12288, per=48.49%, avg=12288.00, stdev= 0.00, samples=1 00:09:01.315 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:01.315 lat (usec) : 250=99.71%, 500=0.29% 00:09:01.315 cpu : usr=3.00%, sys=6.10%, ctx=5465, majf=0, minf=1 00:09:01.315 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:01.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:09:01.315 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.315 issued rwts: total=2560,2903,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.315 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:01.315 job1: (groupid=0, jobs=1): err= 0: pid=3532563: Wed Nov 20 18:46:23 2024 00:09:01.315 read: IOPS=21, BW=85.9KiB/s (88.0kB/s)(88.0KiB/1024msec) 00:09:01.315 slat (nsec): min=10465, max=22687, avg=12409.77, stdev=2921.85 00:09:01.315 clat (usec): min=40555, max=41919, avg=41014.48, stdev=225.93 00:09:01.315 lat (usec): min=40566, max=41931, avg=41026.89, stdev=226.09 00:09:01.315 clat percentiles (usec): 00:09:01.315 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:01.315 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:01.315 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:01.315 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:01.315 | 99.99th=[41681] 00:09:01.315 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:09:01.315 slat (nsec): min=10927, max=49971, avg=13366.80, stdev=3079.68 00:09:01.315 clat (usec): min=122, max=323, avg=213.98, stdev=33.49 00:09:01.315 lat (usec): min=134, max=357, avg=227.35, stdev=33.91 00:09:01.315 clat percentiles (usec): 00:09:01.315 | 1.00th=[ 147], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 176], 00:09:01.315 | 30.00th=[ 204], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 229], 00:09:01.315 | 70.00th=[ 235], 80.00th=[ 241], 90.00th=[ 249], 95.00th=[ 258], 00:09:01.315 | 99.00th=[ 297], 99.50th=[ 310], 99.90th=[ 326], 99.95th=[ 326], 00:09:01.315 | 99.99th=[ 326] 00:09:01.315 bw ( KiB/s): min= 4096, max= 4096, per=16.16%, avg=4096.00, stdev= 0.00, samples=1 00:09:01.315 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:01.315 lat (usec) : 250=87.08%, 500=8.80% 00:09:01.315 lat (msec) : 50=4.12% 00:09:01.315 cpu : usr=0.29%, 
sys=0.68%, ctx=535, majf=0, minf=1 00:09:01.315 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:01.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.315 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.315 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.315 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:01.315 job2: (groupid=0, jobs=1): err= 0: pid=3532582: Wed Nov 20 18:46:23 2024 00:09:01.315 read: IOPS=2034, BW=8139KiB/s (8334kB/s)(8196KiB/1007msec) 00:09:01.315 slat (nsec): min=6500, max=27701, avg=7361.27, stdev=1127.06 00:09:01.315 clat (usec): min=178, max=41098, avg=249.84, stdev=903.41 00:09:01.315 lat (usec): min=185, max=41107, avg=257.20, stdev=903.45 00:09:01.315 clat percentiles (usec): 00:09:01.315 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 204], 00:09:01.315 | 30.00th=[ 210], 40.00th=[ 219], 50.00th=[ 231], 60.00th=[ 239], 00:09:01.315 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 260], 95.00th=[ 265], 00:09:01.316 | 99.00th=[ 277], 99.50th=[ 433], 99.90th=[ 586], 99.95th=[ 668], 00:09:01.316 | 99.99th=[41157] 00:09:01.316 write: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets 00:09:01.316 slat (nsec): min=8879, max=53430, avg=10048.11, stdev=1564.43 00:09:01.316 clat (usec): min=120, max=419, avg=173.55, stdev=34.35 00:09:01.316 lat (usec): min=130, max=429, avg=183.60, stdev=34.56 00:09:01.316 clat percentiles (usec): 00:09:01.316 | 1.00th=[ 129], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 147], 00:09:01.316 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 161], 60.00th=[ 172], 00:09:01.316 | 70.00th=[ 188], 80.00th=[ 200], 90.00th=[ 227], 95.00th=[ 243], 00:09:01.316 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 322], 99.95th=[ 355], 00:09:01.316 | 99.99th=[ 420] 00:09:01.316 bw ( KiB/s): min=10192, max=10288, per=40.41%, avg=10240.00, stdev=67.88, samples=2 00:09:01.316 iops : min= 
2548, max= 2572, avg=2560.00, stdev=16.97, samples=2 00:09:01.316 lat (usec) : 250=87.96%, 500=11.93%, 750=0.09% 00:09:01.316 lat (msec) : 50=0.02% 00:09:01.316 cpu : usr=2.09%, sys=4.37%, ctx=4609, majf=0, minf=2 00:09:01.316 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:01.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.316 issued rwts: total=2049,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.316 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:01.316 job3: (groupid=0, jobs=1): err= 0: pid=3532588: Wed Nov 20 18:46:23 2024 00:09:01.316 read: IOPS=22, BW=89.9KiB/s (92.1kB/s)(92.0KiB/1023msec) 00:09:01.316 slat (nsec): min=7263, max=24247, avg=11222.96, stdev=3008.28 00:09:01.316 clat (usec): min=390, max=41823, avg=39254.00, stdev=8474.63 00:09:01.316 lat (usec): min=403, max=41847, avg=39265.22, stdev=8474.22 00:09:01.316 clat percentiles (usec): 00:09:01.316 | 1.00th=[ 392], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:01.316 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:01.316 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:01.316 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:01.316 | 99.99th=[41681] 00:09:01.316 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:09:01.316 slat (nsec): min=9763, max=44390, avg=12579.28, stdev=2400.99 00:09:01.316 clat (usec): min=138, max=331, avg=218.22, stdev=27.13 00:09:01.316 lat (usec): min=152, max=365, avg=230.80, stdev=27.53 00:09:01.316 clat percentiles (usec): 00:09:01.316 | 1.00th=[ 145], 5.00th=[ 167], 10.00th=[ 180], 20.00th=[ 196], 00:09:01.316 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 229], 00:09:01.316 | 70.00th=[ 235], 80.00th=[ 239], 90.00th=[ 245], 95.00th=[ 253], 00:09:01.316 | 99.00th=[ 285], 
99.50th=[ 306], 99.90th=[ 334], 99.95th=[ 334], 00:09:01.316 | 99.99th=[ 334] 00:09:01.316 bw ( KiB/s): min= 4096, max= 4096, per=16.16%, avg=4096.00, stdev= 0.00, samples=1 00:09:01.316 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:01.316 lat (usec) : 250=88.97%, 500=6.92% 00:09:01.316 lat (msec) : 50=4.11% 00:09:01.316 cpu : usr=0.39%, sys=0.98%, ctx=535, majf=0, minf=2 00:09:01.316 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:01.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.316 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.316 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:01.316 00:09:01.316 Run status group 0 (all jobs): 00:09:01.316 READ: bw=17.8MiB/s (18.6MB/s), 85.9KiB/s-9.99MiB/s (88.0kB/s-10.5MB/s), io=18.2MiB (19.1MB), run=1001-1024msec 00:09:01.316 WRITE: bw=24.7MiB/s (25.9MB/s), 2000KiB/s-11.3MiB/s (2048kB/s-11.9MB/s), io=25.3MiB (26.6MB), run=1001-1024msec 00:09:01.316 00:09:01.316 Disk stats (read/write): 00:09:01.316 nvme0n1: ios=2142/2560, merge=0/0, ticks=1296/375, in_queue=1671, util=99.30% 00:09:01.316 nvme0n2: ios=56/512, merge=0/0, ticks=1742/102, in_queue=1844, util=96.14% 00:09:01.316 nvme0n3: ios=1999/2048, merge=0/0, ticks=468/350, in_queue=818, util=90.33% 00:09:01.316 nvme0n4: ios=74/512, merge=0/0, ticks=719/113, in_queue=832, util=90.88% 00:09:01.316 18:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:01.316 [global] 00:09:01.316 thread=1 00:09:01.316 invalidate=1 00:09:01.316 rw=write 00:09:01.316 time_based=1 00:09:01.316 runtime=1 00:09:01.316 ioengine=libaio 00:09:01.316 direct=1 00:09:01.316 bs=4096 00:09:01.316 iodepth=128 00:09:01.316 norandommap=0 00:09:01.316 
numjobs=1 00:09:01.316 00:09:01.316 verify_dump=1 00:09:01.316 verify_backlog=512 00:09:01.316 verify_state_save=0 00:09:01.316 do_verify=1 00:09:01.316 verify=crc32c-intel 00:09:01.316 [job0] 00:09:01.316 filename=/dev/nvme0n1 00:09:01.316 [job1] 00:09:01.316 filename=/dev/nvme0n2 00:09:01.316 [job2] 00:09:01.316 filename=/dev/nvme0n3 00:09:01.316 [job3] 00:09:01.316 filename=/dev/nvme0n4 00:09:01.316 Could not set queue depth (nvme0n1) 00:09:01.316 Could not set queue depth (nvme0n2) 00:09:01.316 Could not set queue depth (nvme0n3) 00:09:01.316 Could not set queue depth (nvme0n4) 00:09:01.578 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:01.578 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:01.578 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:01.578 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:01.578 fio-3.35 00:09:01.578 Starting 4 threads 00:09:02.954 00:09:02.954 job0: (groupid=0, jobs=1): err= 0: pid=3532984: Wed Nov 20 18:46:24 2024 00:09:02.954 read: IOPS=2867, BW=11.2MiB/s (11.7MB/s)(11.2MiB/1004msec) 00:09:02.954 slat (nsec): min=1659, max=23101k, avg=211132.78, stdev=1113182.24 00:09:02.954 clat (usec): min=849, max=74090, avg=23704.63, stdev=9352.50 00:09:02.954 lat (usec): min=4920, max=74102, avg=23915.76, stdev=9446.26 00:09:02.954 clat percentiles (usec): 00:09:02.954 | 1.00th=[ 8848], 5.00th=[11863], 10.00th=[15795], 20.00th=[18220], 00:09:02.954 | 30.00th=[19268], 40.00th=[20317], 50.00th=[22414], 60.00th=[24249], 00:09:02.954 | 70.00th=[26084], 80.00th=[26608], 90.00th=[31065], 95.00th=[46924], 00:09:02.954 | 99.00th=[70779], 99.50th=[70779], 99.90th=[70779], 99.95th=[70779], 00:09:02.954 | 99.99th=[73925] 00:09:02.954 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone 
resets 00:09:02.954 slat (usec): min=2, max=8940, avg=121.00, stdev=565.03 00:09:02.954 clat (usec): min=8607, max=75691, avg=19099.29, stdev=10307.09 00:09:02.954 lat (usec): min=8617, max=75715, avg=19220.29, stdev=10329.83 00:09:02.954 clat percentiles (usec): 00:09:02.954 | 1.00th=[ 9503], 5.00th=[10683], 10.00th=[11338], 20.00th=[11994], 00:09:02.954 | 30.00th=[12780], 40.00th=[15008], 50.00th=[15795], 60.00th=[19006], 00:09:02.954 | 70.00th=[21365], 80.00th=[21890], 90.00th=[27132], 95.00th=[45876], 00:09:02.954 | 99.00th=[68682], 99.50th=[68682], 99.90th=[68682], 99.95th=[70779], 00:09:02.954 | 99.99th=[76022] 00:09:02.954 bw ( KiB/s): min=12288, max=12288, per=16.50%, avg=12288.00, stdev= 0.00, samples=2 00:09:02.954 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:09:02.954 lat (usec) : 1000=0.02% 00:09:02.954 lat (msec) : 10=2.77%, 20=47.19%, 50=47.40%, 100=2.62% 00:09:02.954 cpu : usr=2.49%, sys=4.79%, ctx=361, majf=0, minf=1 00:09:02.954 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:09:02.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:02.954 issued rwts: total=2879,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:02.954 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:02.954 job1: (groupid=0, jobs=1): err= 0: pid=3532985: Wed Nov 20 18:46:24 2024 00:09:02.954 read: IOPS=6238, BW=24.4MiB/s (25.6MB/s)(24.5MiB/1005msec) 00:09:02.954 slat (nsec): min=1342, max=9563.2k, avg=81636.89, stdev=588802.60 00:09:02.954 clat (usec): min=779, max=19867, avg=10287.48, stdev=2402.41 00:09:02.954 lat (usec): min=3391, max=19895, avg=10369.12, stdev=2445.07 00:09:02.954 clat percentiles (usec): 00:09:02.954 | 1.00th=[ 5014], 5.00th=[ 7373], 10.00th=[ 8029], 20.00th=[ 8979], 00:09:02.954 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10028], 00:09:02.954 | 70.00th=[10683], 
80.00th=[11338], 90.00th=[13829], 95.00th=[15664], 00:09:02.954 | 99.00th=[17695], 99.50th=[18482], 99.90th=[19530], 99.95th=[19530], 00:09:02.954 | 99.99th=[19792] 00:09:02.954 write: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec); 0 zone resets 00:09:02.954 slat (usec): min=2, max=18094, avg=68.07, stdev=501.85 00:09:02.954 clat (usec): min=2460, max=33981, avg=9460.48, stdev=2599.85 00:09:02.954 lat (usec): min=2467, max=34004, avg=9528.55, stdev=2654.99 00:09:02.954 clat percentiles (usec): 00:09:02.954 | 1.00th=[ 3163], 5.00th=[ 5080], 10.00th=[ 6915], 20.00th=[ 8225], 00:09:02.954 | 30.00th=[ 9110], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[ 9634], 00:09:02.954 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[11207], 95.00th=[15008], 00:09:02.954 | 99.00th=[20055], 99.50th=[20055], 99.90th=[20055], 99.95th=[20579], 00:09:02.954 | 99.99th=[33817] 00:09:02.954 bw ( KiB/s): min=25456, max=27776, per=35.74%, avg=26616.00, stdev=1640.49, samples=2 00:09:02.954 iops : min= 6364, max= 6944, avg=6654.00, stdev=410.12, samples=2 00:09:02.954 lat (usec) : 1000=0.01% 00:09:02.954 lat (msec) : 4=1.67%, 10=65.34%, 20=32.18%, 50=0.80% 00:09:02.954 cpu : usr=5.48%, sys=7.07%, ctx=645, majf=0, minf=1 00:09:02.954 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:02.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:02.954 issued rwts: total=6270,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:02.954 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:02.954 job2: (groupid=0, jobs=1): err= 0: pid=3532986: Wed Nov 20 18:46:24 2024 00:09:02.954 read: IOPS=4716, BW=18.4MiB/s (19.3MB/s)(18.4MiB/1001msec) 00:09:02.954 slat (nsec): min=1491, max=21109k, avg=102569.41, stdev=626230.21 00:09:02.954 clat (usec): min=525, max=54698, avg=13133.56, stdev=6429.59 00:09:02.954 lat (usec): min=2057, max=54708, avg=13236.13, 
stdev=6449.14 00:09:02.954 clat percentiles (usec): 00:09:02.954 | 1.00th=[ 4948], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[10945], 00:09:02.954 | 30.00th=[11600], 40.00th=[11863], 50.00th=[11994], 60.00th=[12387], 00:09:02.954 | 70.00th=[12518], 80.00th=[13042], 90.00th=[13698], 95.00th=[14484], 00:09:02.954 | 99.00th=[51119], 99.50th=[54789], 99.90th=[54789], 99.95th=[54789], 00:09:02.954 | 99.99th=[54789] 00:09:02.954 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:09:02.954 slat (usec): min=2, max=15951, avg=95.57, stdev=516.06 00:09:02.954 clat (usec): min=6918, max=37397, avg=12419.08, stdev=3931.47 00:09:02.954 lat (usec): min=6927, max=37408, avg=12514.65, stdev=3930.46 00:09:02.954 clat percentiles (usec): 00:09:02.954 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[10421], 20.00th=[10945], 00:09:02.954 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11600], 60.00th=[11731], 00:09:02.954 | 70.00th=[11994], 80.00th=[12256], 90.00th=[13304], 95.00th=[18482], 00:09:02.954 | 99.00th=[35914], 99.50th=[37487], 99.90th=[37487], 99.95th=[37487], 00:09:02.954 | 99.99th=[37487] 00:09:02.954 bw ( KiB/s): min=17960, max=17960, per=24.11%, avg=17960.00, stdev= 0.00, samples=1 00:09:02.955 iops : min= 4490, max= 4490, avg=4490.00, stdev= 0.00, samples=1 00:09:02.955 lat (usec) : 750=0.01% 00:09:02.955 lat (msec) : 4=0.16%, 10=7.17%, 20=88.16%, 50=3.86%, 100=0.63% 00:09:02.955 cpu : usr=3.90%, sys=5.40%, ctx=524, majf=0, minf=1 00:09:02.955 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:02.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:02.955 issued rwts: total=4721,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:02.955 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:02.955 job3: (groupid=0, jobs=1): err= 0: pid=3532987: Wed Nov 20 18:46:24 2024 00:09:02.955 read: IOPS=3573, 
BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:09:02.955 slat (nsec): min=1128, max=11588k, avg=149694.46, stdev=842929.16 00:09:02.955 clat (usec): min=5315, max=38065, avg=19610.98, stdev=7635.79 00:09:02.955 lat (usec): min=5322, max=38074, avg=19760.68, stdev=7669.76 00:09:02.955 clat percentiles (usec): 00:09:02.955 | 1.00th=[ 6128], 5.00th=[10945], 10.00th=[11338], 20.00th=[12256], 00:09:02.955 | 30.00th=[13173], 40.00th=[15795], 50.00th=[20317], 60.00th=[21627], 00:09:02.955 | 70.00th=[22152], 80.00th=[25297], 90.00th=[32375], 95.00th=[34866], 00:09:02.955 | 99.00th=[37487], 99.50th=[38011], 99.90th=[38011], 99.95th=[38011], 00:09:02.955 | 99.99th=[38011] 00:09:02.955 write: IOPS=3853, BW=15.1MiB/s (15.8MB/s)(15.1MiB/1003msec); 0 zone resets 00:09:02.955 slat (nsec): min=1957, max=15610k, avg=108046.05, stdev=694649.02 00:09:02.955 clat (usec): min=561, max=38920, avg=14732.47, stdev=5675.56 00:09:02.955 lat (usec): min=569, max=38924, avg=14840.52, stdev=5703.89 00:09:02.955 clat percentiles (usec): 00:09:02.955 | 1.00th=[ 1401], 5.00th=[ 4817], 10.00th=[ 8291], 20.00th=[ 9503], 00:09:02.955 | 30.00th=[10945], 40.00th=[13435], 50.00th=[16319], 60.00th=[17171], 00:09:02.955 | 70.00th=[17695], 80.00th=[18482], 90.00th=[20055], 95.00th=[21627], 00:09:02.955 | 99.00th=[32375], 99.50th=[35390], 99.90th=[39060], 99.95th=[39060], 00:09:02.955 | 99.99th=[39060] 00:09:02.955 bw ( KiB/s): min=11760, max=18144, per=20.08%, avg=14952.00, stdev=4514.17, samples=2 00:09:02.955 iops : min= 2940, max= 4536, avg=3738.00, stdev=1128.54, samples=2 00:09:02.955 lat (usec) : 750=0.04%, 1000=0.17% 00:09:02.955 lat (msec) : 2=0.32%, 4=1.28%, 10=12.66%, 20=55.51%, 50=30.02% 00:09:02.955 cpu : usr=3.89%, sys=4.59%, ctx=302, majf=0, minf=2 00:09:02.955 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:02.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1% 00:09:02.955 issued rwts: total=3584,3865,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:02.955 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:02.955 00:09:02.955 Run status group 0 (all jobs): 00:09:02.955 READ: bw=67.8MiB/s (71.1MB/s), 11.2MiB/s-24.4MiB/s (11.7MB/s-25.6MB/s), io=68.2MiB (71.5MB), run=1001-1005msec 00:09:02.955 WRITE: bw=72.7MiB/s (76.3MB/s), 12.0MiB/s-25.9MiB/s (12.5MB/s-27.1MB/s), io=73.1MiB (76.6MB), run=1001-1005msec 00:09:02.955 00:09:02.955 Disk stats (read/write): 00:09:02.955 nvme0n1: ios=2391/2560, merge=0/0, ticks=20378/14729, in_queue=35107, util=87.17% 00:09:02.955 nvme0n2: ios=5289/5632, merge=0/0, ticks=51838/52654, in_queue=104492, util=91.18% 00:09:02.955 nvme0n3: ios=4144/4136, merge=0/0, ticks=14198/12163, in_queue=26361, util=94.60% 00:09:02.955 nvme0n4: ios=3148/3584, merge=0/0, ticks=21521/27062, in_queue=48583, util=94.25% 00:09:02.955 18:46:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:02.955 [global] 00:09:02.955 thread=1 00:09:02.955 invalidate=1 00:09:02.955 rw=randwrite 00:09:02.955 time_based=1 00:09:02.955 runtime=1 00:09:02.955 ioengine=libaio 00:09:02.955 direct=1 00:09:02.955 bs=4096 00:09:02.955 iodepth=128 00:09:02.955 norandommap=0 00:09:02.955 numjobs=1 00:09:02.955 00:09:02.955 verify_dump=1 00:09:02.955 verify_backlog=512 00:09:02.955 verify_state_save=0 00:09:02.955 do_verify=1 00:09:02.955 verify=crc32c-intel 00:09:02.955 [job0] 00:09:02.955 filename=/dev/nvme0n1 00:09:02.955 [job1] 00:09:02.955 filename=/dev/nvme0n2 00:09:02.955 [job2] 00:09:02.955 filename=/dev/nvme0n3 00:09:02.955 [job3] 00:09:02.955 filename=/dev/nvme0n4 00:09:02.955 Could not set queue depth (nvme0n1) 00:09:02.955 Could not set queue depth (nvme0n2) 00:09:02.955 Could not set queue depth (nvme0n3) 00:09:02.955 Could not set queue depth (nvme0n4) 00:09:02.955 job0: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:02.955 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:02.955 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:02.955 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:02.955 fio-3.35 00:09:02.955 Starting 4 threads 00:09:04.331 00:09:04.331 job0: (groupid=0, jobs=1): err= 0: pid=3533364: Wed Nov 20 18:46:26 2024 00:09:04.331 read: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec) 00:09:04.331 slat (nsec): min=1463, max=9823.9k, avg=95621.82, stdev=683936.65 00:09:04.331 clat (usec): min=3608, max=49770, avg=12269.78, stdev=5388.72 00:09:04.331 lat (usec): min=3613, max=49777, avg=12365.40, stdev=5450.17 00:09:04.331 clat percentiles (usec): 00:09:04.331 | 1.00th=[ 4359], 5.00th=[ 4948], 10.00th=[ 6325], 20.00th=[ 8979], 00:09:04.331 | 30.00th=[ 9896], 40.00th=[10814], 50.00th=[11600], 60.00th=[13435], 00:09:04.331 | 70.00th=[14353], 80.00th=[15270], 90.00th=[16188], 95.00th=[17957], 00:09:04.331 | 99.00th=[37487], 99.50th=[42730], 99.90th=[44827], 99.95th=[44827], 00:09:04.331 | 99.99th=[49546] 00:09:04.331 write: IOPS=5008, BW=19.6MiB/s (20.5MB/s)(19.7MiB/1009msec); 0 zone resets 00:09:04.331 slat (nsec): min=1887, max=11069k, avg=90168.56, stdev=574569.44 00:09:04.331 clat (usec): min=704, max=55907, avg=14133.13, stdev=10615.00 00:09:04.331 lat (usec): min=711, max=55915, avg=14223.30, stdev=10688.35 00:09:04.331 clat percentiles (usec): 00:09:04.331 | 1.00th=[ 3392], 5.00th=[ 4555], 10.00th=[ 5669], 20.00th=[ 7242], 00:09:04.331 | 30.00th=[ 8225], 40.00th=[ 9110], 50.00th=[ 9896], 60.00th=[11863], 00:09:04.331 | 70.00th=[14353], 80.00th=[20317], 90.00th=[30016], 95.00th=[38536], 00:09:04.331 | 99.00th=[55837], 99.50th=[55837], 99.90th=[55837], 
99.95th=[55837], 00:09:04.331 | 99.99th=[55837] 00:09:04.331 bw ( KiB/s): min=15864, max=23552, per=28.78%, avg=19708.00, stdev=5436.24, samples=2 00:09:04.331 iops : min= 3966, max= 5888, avg=4927.00, stdev=1359.06, samples=2 00:09:04.331 lat (usec) : 750=0.08% 00:09:04.331 lat (msec) : 2=0.01%, 4=2.00%, 10=39.83%, 20=45.25%, 50=12.01% 00:09:04.331 lat (msec) : 100=0.83% 00:09:04.331 cpu : usr=2.48%, sys=6.45%, ctx=408, majf=0, minf=1 00:09:04.331 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:04.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:04.332 issued rwts: total=4608,5054,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.332 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:04.332 job1: (groupid=0, jobs=1): err= 0: pid=3533365: Wed Nov 20 18:46:26 2024 00:09:04.332 read: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec) 00:09:04.332 slat (nsec): min=1067, max=32120k, avg=79637.57, stdev=675705.41 00:09:04.332 clat (usec): min=1844, max=65272, avg=10586.94, stdev=7366.95 00:09:04.332 lat (usec): min=1848, max=65274, avg=10666.58, stdev=7402.36 00:09:04.332 clat percentiles (usec): 00:09:04.332 | 1.00th=[ 2442], 5.00th=[ 4178], 10.00th=[ 6128], 20.00th=[ 7963], 00:09:04.332 | 30.00th=[ 8225], 40.00th=[ 8455], 50.00th=[ 9241], 60.00th=[10028], 00:09:04.332 | 70.00th=[10159], 80.00th=[10683], 90.00th=[13304], 95.00th=[23987], 00:09:04.332 | 99.00th=[47973], 99.50th=[65274], 99.90th=[65274], 99.95th=[65274], 00:09:04.332 | 99.99th=[65274] 00:09:04.332 write: IOPS=6292, BW=24.6MiB/s (25.8MB/s)(24.7MiB/1004msec); 0 zone resets 00:09:04.332 slat (nsec): min=1801, max=8962.4k, avg=75269.02, stdev=415229.74 00:09:04.332 clat (usec): min=2290, max=32748, avg=9777.43, stdev=4100.40 00:09:04.332 lat (usec): min=2298, max=32750, avg=9852.70, stdev=4112.61 00:09:04.332 clat percentiles (usec): 00:09:04.332 | 
1.00th=[ 3195], 5.00th=[ 5080], 10.00th=[ 6063], 20.00th=[ 7767], 00:09:04.332 | 30.00th=[ 8225], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9896], 00:09:04.332 | 70.00th=[10159], 80.00th=[10421], 90.00th=[12780], 95.00th=[19006], 00:09:04.332 | 99.00th=[25560], 99.50th=[26346], 99.90th=[32637], 99.95th=[32637], 00:09:04.332 | 99.99th=[32637] 00:09:04.332 bw ( KiB/s): min=21888, max=27640, per=36.16%, avg=24764.00, stdev=4067.28, samples=2 00:09:04.332 iops : min= 5472, max= 6910, avg=6191.00, stdev=1016.82, samples=2 00:09:04.332 lat (msec) : 2=0.17%, 4=3.88%, 10=59.56%, 20=30.67%, 50=5.30% 00:09:04.332 lat (msec) : 100=0.43% 00:09:04.332 cpu : usr=2.79%, sys=5.98%, ctx=599, majf=0, minf=1 00:09:04.332 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:04.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:04.332 issued rwts: total=6144,6318,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.332 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:04.332 job2: (groupid=0, jobs=1): err= 0: pid=3533366: Wed Nov 20 18:46:26 2024 00:09:04.332 read: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec) 00:09:04.332 slat (nsec): min=1408, max=13965k, avg=198584.58, stdev=1007252.71 00:09:04.332 clat (usec): min=9266, max=66453, avg=24188.24, stdev=14218.22 00:09:04.332 lat (usec): min=9665, max=66460, avg=24386.83, stdev=14294.62 00:09:04.332 clat percentiles (usec): 00:09:04.332 | 1.00th=[10028], 5.00th=[11076], 10.00th=[12256], 20.00th=[12780], 00:09:04.332 | 30.00th=[13960], 40.00th=[17171], 50.00th=[19530], 60.00th=[22152], 00:09:04.332 | 70.00th=[25822], 80.00th=[33424], 90.00th=[51643], 95.00th=[55837], 00:09:04.332 | 99.00th=[65799], 99.50th=[66323], 99.90th=[66323], 99.95th=[66323], 00:09:04.332 | 99.99th=[66323] 00:09:04.332 write: IOPS=2820, BW=11.0MiB/s (11.6MB/s)(11.1MiB/1010msec); 0 zone resets 00:09:04.332 
slat (nsec): min=1948, max=17722k, avg=168046.96, stdev=1016564.18 00:09:04.332 clat (usec): min=461, max=56179, avg=23048.68, stdev=10370.71 00:09:04.332 lat (usec): min=8943, max=56188, avg=23216.72, stdev=10372.73 00:09:04.332 clat percentiles (usec): 00:09:04.332 | 1.00th=[ 9634], 5.00th=[11600], 10.00th=[12256], 20.00th=[12780], 00:09:04.332 | 30.00th=[16581], 40.00th=[19006], 50.00th=[22152], 60.00th=[24249], 00:09:04.332 | 70.00th=[27132], 80.00th=[30278], 90.00th=[36439], 95.00th=[43779], 00:09:04.332 | 99.00th=[54264], 99.50th=[54264], 99.90th=[56361], 99.95th=[56361], 00:09:04.332 | 99.99th=[56361] 00:09:04.332 bw ( KiB/s): min= 8624, max=13144, per=15.89%, avg=10884.00, stdev=3196.12, samples=2 00:09:04.332 iops : min= 2156, max= 3286, avg=2721.00, stdev=799.03, samples=2 00:09:04.332 lat (usec) : 500=0.02% 00:09:04.332 lat (msec) : 10=1.31%, 20=45.81%, 50=45.90%, 100=6.95% 00:09:04.332 cpu : usr=2.48%, sys=4.76%, ctx=245, majf=0, minf=2 00:09:04.332 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:09:04.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:04.332 issued rwts: total=2560,2849,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.332 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:04.332 job3: (groupid=0, jobs=1): err= 0: pid=3533367: Wed Nov 20 18:46:26 2024 00:09:04.332 read: IOPS=3025, BW=11.8MiB/s (12.4MB/s)(11.9MiB/1010msec) 00:09:04.332 slat (nsec): min=1358, max=21432k, avg=157314.54, stdev=1073691.16 00:09:04.332 clat (usec): min=5049, max=73008, avg=17314.58, stdev=10006.41 00:09:04.332 lat (usec): min=5243, max=73016, avg=17471.89, stdev=10117.59 00:09:04.332 clat percentiles (usec): 00:09:04.332 | 1.00th=[ 7308], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10945], 00:09:04.332 | 30.00th=[11469], 40.00th=[13173], 50.00th=[13435], 60.00th=[14746], 00:09:04.332 | 70.00th=[15926], 
80.00th=[21103], 90.00th=[32637], 95.00th=[41681], 00:09:04.332 | 99.00th=[52167], 99.50th=[57410], 99.90th=[72877], 99.95th=[72877], 00:09:04.332 | 99.99th=[72877] 00:09:04.332 write: IOPS=3041, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1010msec); 0 zone resets 00:09:04.332 slat (nsec): min=1877, max=12921k, avg=163883.19, stdev=706634.33 00:09:04.332 clat (usec): min=3207, max=74561, avg=24418.87, stdev=15071.54 00:09:04.332 lat (usec): min=3217, max=74570, avg=24582.75, stdev=15156.66 00:09:04.332 clat percentiles (usec): 00:09:04.332 | 1.00th=[ 4752], 5.00th=[ 6194], 10.00th=[ 8029], 20.00th=[10945], 00:09:04.332 | 30.00th=[12911], 40.00th=[18744], 50.00th=[22414], 60.00th=[25560], 00:09:04.332 | 70.00th=[28705], 80.00th=[37487], 90.00th=[44303], 95.00th=[54264], 00:09:04.332 | 99.00th=[69731], 99.50th=[72877], 99.90th=[74974], 99.95th=[74974], 00:09:04.332 | 99.99th=[74974] 00:09:04.332 bw ( KiB/s): min=10960, max=13616, per=17.94%, avg=12288.00, stdev=1878.08, samples=2 00:09:04.332 iops : min= 2740, max= 3404, avg=3072.00, stdev=469.52, samples=2 00:09:04.332 lat (msec) : 4=0.10%, 10=11.49%, 20=47.70%, 50=36.73%, 100=3.98% 00:09:04.332 cpu : usr=2.28%, sys=4.16%, ctx=371, majf=0, minf=1 00:09:04.332 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:04.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:04.332 issued rwts: total=3056,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.332 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:04.332 00:09:04.332 Run status group 0 (all jobs): 00:09:04.332 READ: bw=63.3MiB/s (66.4MB/s), 9.90MiB/s-23.9MiB/s (10.4MB/s-25.1MB/s), io=63.9MiB (67.0MB), run=1004-1010msec 00:09:04.332 WRITE: bw=66.9MiB/s (70.1MB/s), 11.0MiB/s-24.6MiB/s (11.6MB/s-25.8MB/s), io=67.6MiB (70.8MB), run=1004-1010msec 00:09:04.332 00:09:04.332 Disk stats (read/write): 00:09:04.332 nvme0n1: 
ios=3491/3584, merge=0/0, ticks=32026/42118, in_queue=74144, util=97.89% 00:09:04.332 nvme0n2: ios=5156/5326, merge=0/0, ticks=22951/19866, in_queue=42817, util=98.05% 00:09:04.332 nvme0n3: ios=2048/2474, merge=0/0, ticks=12910/12239, in_queue=25149, util=87.68% 00:09:04.332 nvme0n4: ios=2048/2519, merge=0/0, ticks=26162/56047, in_queue=82209, util=88.89% 00:09:04.332 18:46:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:04.332 18:46:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3533593 00:09:04.332 18:46:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:04.332 18:46:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:04.332 [global] 00:09:04.332 thread=1 00:09:04.332 invalidate=1 00:09:04.332 rw=read 00:09:04.332 time_based=1 00:09:04.332 runtime=10 00:09:04.332 ioengine=libaio 00:09:04.332 direct=1 00:09:04.332 bs=4096 00:09:04.332 iodepth=1 00:09:04.332 norandommap=1 00:09:04.332 numjobs=1 00:09:04.332 00:09:04.332 [job0] 00:09:04.332 filename=/dev/nvme0n1 00:09:04.332 [job1] 00:09:04.332 filename=/dev/nvme0n2 00:09:04.332 [job2] 00:09:04.332 filename=/dev/nvme0n3 00:09:04.332 [job3] 00:09:04.332 filename=/dev/nvme0n4 00:09:04.332 Could not set queue depth (nvme0n1) 00:09:04.332 Could not set queue depth (nvme0n2) 00:09:04.332 Could not set queue depth (nvme0n3) 00:09:04.332 Could not set queue depth (nvme0n4) 00:09:04.591 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:04.591 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:04.591 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:04.591 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:09:04.591 fio-3.35 00:09:04.591 Starting 4 threads 00:09:07.903 18:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:07.903 18:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:07.903 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=274432, buflen=4096 00:09:07.903 fio: pid=3533741, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:07.903 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=294912, buflen=4096 00:09:07.903 fio: pid=3533740, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:07.903 18:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:07.903 18:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:07.903 18:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:07.903 18:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:07.903 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=319488, buflen=4096 00:09:07.903 fio: pid=3533733, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:08.183 18:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:08.183 18:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:08.183 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=52404224, buflen=4096 00:09:08.183 fio: pid=3533734, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:08.183 00:09:08.183 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3533733: Wed Nov 20 18:46:30 2024 00:09:08.183 read: IOPS=25, BW=99.6KiB/s (102kB/s)(312KiB/3133msec) 00:09:08.183 slat (usec): min=10, max=20820, avg=440.63, stdev=2735.48 00:09:08.183 clat (usec): min=292, max=41933, avg=39449.86, stdev=7860.35 00:09:08.183 lat (usec): min=312, max=53909, avg=39895.86, stdev=6985.64 00:09:08.183 clat percentiles (usec): 00:09:08.183 | 1.00th=[ 293], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:08.183 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:08.183 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:08.183 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:08.183 | 99.99th=[41681] 00:09:08.183 bw ( KiB/s): min= 96, max= 104, per=0.64%, avg=99.50, stdev= 3.99, samples=6 00:09:08.183 iops : min= 24, max= 26, avg=24.83, stdev= 0.98, samples=6 00:09:08.183 lat (usec) : 500=3.80% 00:09:08.183 lat (msec) : 50=94.94% 00:09:08.183 cpu : usr=0.00%, sys=0.10%, ctx=81, majf=0, minf=1 00:09:08.183 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:08.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.183 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.183 issued rwts: total=79,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.183 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:08.183 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): 
pid=3533734: Wed Nov 20 18:46:30 2024 00:09:08.183 read: IOPS=3793, BW=14.8MiB/s (15.5MB/s)(50.0MiB/3373msec) 00:09:08.183 slat (usec): min=5, max=20901, avg=10.93, stdev=218.42 00:09:08.183 clat (usec): min=160, max=48166, avg=248.98, stdev=1413.23 00:09:08.183 lat (usec): min=173, max=53998, avg=258.28, stdev=1453.41 00:09:08.183 clat percentiles (usec): 00:09:08.183 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 190], 00:09:08.183 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 204], 00:09:08.183 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 219], 95.00th=[ 225], 00:09:08.183 | 99.00th=[ 239], 99.50th=[ 253], 99.90th=[40633], 99.95th=[41157], 00:09:08.183 | 99.99th=[41157] 00:09:08.183 bw ( KiB/s): min= 6070, max=19112, per=100.00%, avg=16862.33, stdev=5288.33, samples=6 00:09:08.183 iops : min= 1517, max= 4778, avg=4215.50, stdev=1322.29, samples=6 00:09:08.183 lat (usec) : 250=99.45%, 500=0.40%, 750=0.01%, 1000=0.01% 00:09:08.183 lat (msec) : 2=0.01%, 50=0.12% 00:09:08.183 cpu : usr=1.75%, sys=6.32%, ctx=12800, majf=0, minf=2 00:09:08.183 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:08.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.183 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.183 issued rwts: total=12795,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.183 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:08.183 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3533740: Wed Nov 20 18:46:30 2024 00:09:08.183 read: IOPS=25, BW=99.1KiB/s (101kB/s)(288KiB/2907msec) 00:09:08.183 slat (usec): min=9, max=104, avg=22.58, stdev=11.14 00:09:08.183 clat (usec): min=338, max=42045, avg=39948.73, stdev=6744.60 00:09:08.183 lat (usec): min=368, max=42069, avg=39971.30, stdev=6743.46 00:09:08.183 clat percentiles (usec): 00:09:08.183 | 1.00th=[ 338], 5.00th=[40633], 
10.00th=[40633], 20.00th=[41157], 00:09:08.183 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:08.183 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:09:08.183 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:08.183 | 99.99th=[42206] 00:09:08.183 bw ( KiB/s): min= 96, max= 112, per=0.64%, avg=99.20, stdev= 7.16, samples=5 00:09:08.183 iops : min= 24, max= 28, avg=24.80, stdev= 1.79, samples=5 00:09:08.183 lat (usec) : 500=2.74% 00:09:08.183 lat (msec) : 50=95.89% 00:09:08.183 cpu : usr=0.10%, sys=0.00%, ctx=77, majf=0, minf=2 00:09:08.183 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:08.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.183 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.183 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.183 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:08.183 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3533741: Wed Nov 20 18:46:30 2024 00:09:08.183 read: IOPS=24, BW=98.2KiB/s (101kB/s)(268KiB/2728msec) 00:09:08.183 slat (nsec): min=13469, max=34655, avg=22840.34, stdev=2048.34 00:09:08.183 clat (usec): min=347, max=41086, avg=40360.93, stdev=4962.92 00:09:08.183 lat (usec): min=381, max=41112, avg=40383.77, stdev=4961.47 00:09:08.183 clat percentiles (usec): 00:09:08.183 | 1.00th=[ 347], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:08.183 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:08.183 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:08.183 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:08.183 | 99.99th=[41157] 00:09:08.183 bw ( KiB/s): min= 96, max= 104, per=0.64%, avg=99.20, stdev= 4.38, samples=5 00:09:08.183 iops : min= 24, max= 26, avg=24.80, stdev= 
1.10, samples=5 00:09:08.183 lat (usec) : 500=1.47% 00:09:08.183 lat (msec) : 50=97.06% 00:09:08.183 cpu : usr=0.11%, sys=0.00%, ctx=68, majf=0, minf=2 00:09:08.183 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:08.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.183 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.183 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.183 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:08.183 00:09:08.183 Run status group 0 (all jobs): 00:09:08.183 READ: bw=15.1MiB/s (15.8MB/s), 98.2KiB/s-14.8MiB/s (101kB/s-15.5MB/s), io=50.8MiB (53.3MB), run=2728-3373msec 00:09:08.183 00:09:08.183 Disk stats (read/write): 00:09:08.183 nvme0n1: ios=77/0, merge=0/0, ticks=3037/0, in_queue=3037, util=94.51% 00:09:08.183 nvme0n2: ios=12794/0, merge=0/0, ticks=3040/0, in_queue=3040, util=95.40% 00:09:08.183 nvme0n3: ios=115/0, merge=0/0, ticks=3802/0, in_queue=3802, util=99.70% 00:09:08.183 nvme0n4: ios=64/0, merge=0/0, ticks=2583/0, in_queue=2583, util=96.44% 00:09:08.479 18:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:08.479 18:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:08.479 18:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:08.479 18:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:08.738 18:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:08.738 18:46:30 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:08.996 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:08.996 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:09.254 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:09.254 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3533593 00:09:09.254 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:09.254 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:09.254 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.254 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:09.254 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:09.254 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:09.254 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:09.254 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:09.254 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:09.254 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:09.254 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 
']' 00:09:09.254 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:09.254 nvmf hotplug test: fio failed as expected 00:09:09.254 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:09.513 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:09.513 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:09.513 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:09.513 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:09.513 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:09.513 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:09.513 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:09.513 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:09.513 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:09.513 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:09.513 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:09.513 rmmod nvme_tcp 00:09:09.513 rmmod nvme_fabrics 00:09:09.513 rmmod nvme_keyring 00:09:09.513 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:09.513 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:09.513 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@129 -- # return 0 00:09:09.513 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3530668 ']' 00:09:09.513 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3530668 00:09:09.513 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3530668 ']' 00:09:09.513 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3530668 00:09:09.513 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:09.513 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:09.513 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3530668 00:09:09.513 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:09.513 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:09.513 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3530668' 00:09:09.513 killing process with pid 3530668 00:09:09.513 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3530668 00:09:09.513 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3530668 00:09:09.772 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:09.772 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:09.772 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:09.773 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:09.773 18:46:31 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:09.773 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:09.773 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:09.773 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:09.773 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:09.773 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.773 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.773 18:46:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.309 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:12.309 00:09:12.309 real 0m26.920s 00:09:12.309 user 1m47.161s 00:09:12.309 sys 0m8.395s 00:09:12.309 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:12.309 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:12.309 ************************************ 00:09:12.309 END TEST nvmf_fio_target 00:09:12.309 ************************************ 00:09:12.309 18:46:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:12.309 18:46:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:12.309 18:46:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:12.309 18:46:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 
-- # set +x 00:09:12.309 ************************************ 00:09:12.309 START TEST nvmf_bdevio 00:09:12.309 ************************************ 00:09:12.309 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:12.309 * Looking for test storage... 00:09:12.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:12.309 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:12.309 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:09:12.309 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:12.309 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:12.309 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:12.309 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:12.309 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:12.309 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:12.309 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:12.309 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:12.309 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:12.309 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:12.309 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:12.309 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:12.309 18:46:34 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:12.309 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:12.309 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:12.309 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:12.309 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:12.309 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:12.309 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:12.309 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:12.309 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:12.309 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:12.309 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:12.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.310 --rc genhtml_branch_coverage=1 00:09:12.310 --rc genhtml_function_coverage=1 00:09:12.310 --rc genhtml_legend=1 00:09:12.310 --rc geninfo_all_blocks=1 00:09:12.310 --rc geninfo_unexecuted_blocks=1 00:09:12.310 00:09:12.310 ' 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:12.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.310 --rc genhtml_branch_coverage=1 00:09:12.310 --rc genhtml_function_coverage=1 00:09:12.310 --rc genhtml_legend=1 00:09:12.310 --rc geninfo_all_blocks=1 00:09:12.310 --rc geninfo_unexecuted_blocks=1 00:09:12.310 00:09:12.310 ' 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:12.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.310 --rc genhtml_branch_coverage=1 00:09:12.310 --rc genhtml_function_coverage=1 00:09:12.310 --rc genhtml_legend=1 00:09:12.310 --rc geninfo_all_blocks=1 00:09:12.310 --rc geninfo_unexecuted_blocks=1 00:09:12.310 00:09:12.310 ' 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:12.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.310 --rc genhtml_branch_coverage=1 00:09:12.310 --rc genhtml_function_coverage=1 00:09:12.310 --rc genhtml_legend=1 00:09:12.310 --rc geninfo_all_blocks=1 00:09:12.310 --rc geninfo_unexecuted_blocks=1 00:09:12.310 00:09:12.310 ' 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # 
uname -s 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:12.310 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:12.310 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:18.881 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:18.881 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:18.882 18:46:40 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:18.882 18:46:40 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:18.882 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:18.882 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:18.882 
18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:18.882 Found net devices under 0000:86:00.0: cvl_0_0 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:18.882 Found net devices under 0000:86:00.1: cvl_0_1 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:18.882 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:18.883 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:18.883 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.426 ms 00:09:18.883 00:09:18.883 --- 10.0.0.2 ping statistics --- 00:09:18.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.883 rtt min/avg/max/mdev = 0.426/0.426/0.426/0.000 ms 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:18.883 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:18.883 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:09:18.883 00:09:18.883 --- 10.0.0.1 ping statistics --- 00:09:18.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.883 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:18.883 18:46:40 
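The netns loopback topology that `nvmf_tcp_init` builds in the trace above (nvmf/common.sh@267-291) can be sketched as a dry-run script. The interface names (`cvl_0_0`/`cvl_0_1`), namespace name, and 10.0.0.0/24 addresses are the values from this particular run, not fixed constants; the function only prints the commands, so it is safe to inspect before piping anywhere privileged.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace topology nvmf_tcp_init sets up in this log.
# Names and addresses below are taken from this run; adjust for other hardware.
print_setup() {
  local ns=cvl_0_0_ns_spdk target=cvl_0_0 initiator=cvl_0_1
  echo "ip -4 addr flush $target"
  echo "ip -4 addr flush $initiator"
  echo "ip netns add $ns"
  echo "ip link set $target netns $ns"                  # target NIC moves into the namespace
  echo "ip addr add 10.0.0.1/24 dev $initiator"         # initiator side stays in the root ns
  echo "ip netns exec $ns ip addr add 10.0.0.2/24 dev $target"
  echo "ip link set $initiator up"
  echo "ip netns exec $ns ip link set $target up"
  echo "ip netns exec $ns ip link set lo up"
  echo "iptables -I INPUT 1 -i $initiator -p tcp --dport 4420 -j ACCEPT"
  echo "ping -c 1 10.0.0.2"                             # root ns -> namespaced target
  echo "ip netns exec $ns ping -c 1 10.0.0.1"           # namespaced target -> root ns
}
print_setup
```

The two final pings mirror the connectivity check the harness performs before it starts `nvmf_tgt` inside the namespace.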
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3538104 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3538104 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3538104 ']' 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:18.883 [2024-11-20 18:46:40.367608] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:09:18.883 [2024-11-20 18:46:40.367652] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:18.883 [2024-11-20 18:46:40.428940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:18.883 [2024-11-20 18:46:40.471581] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:18.883 [2024-11-20 18:46:40.471618] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:18.883 [2024-11-20 18:46:40.471625] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:18.883 [2024-11-20 18:46:40.471631] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:18.883 [2024-11-20 18:46:40.471636] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:18.883 [2024-11-20 18:46:40.473351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:18.883 [2024-11-20 18:46:40.473458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:18.883 [2024-11-20 18:46:40.473565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:18.883 [2024-11-20 18:46:40.473566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:18.883 [2024-11-20 18:46:40.610983] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.883 18:46:40 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:18.883 Malloc0 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:18.883 [2024-11-20 18:46:40.670935] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:18.883 { 00:09:18.883 "params": { 00:09:18.883 "name": "Nvme$subsystem", 00:09:18.883 "trtype": "$TEST_TRANSPORT", 00:09:18.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:18.883 "adrfam": "ipv4", 00:09:18.883 "trsvcid": "$NVMF_PORT", 00:09:18.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:18.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:18.883 "hdgst": ${hdgst:-false}, 00:09:18.883 "ddgst": ${ddgst:-false} 00:09:18.883 }, 00:09:18.883 "method": "bdev_nvme_attach_controller" 00:09:18.883 } 00:09:18.883 EOF 00:09:18.883 )") 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:18.883 18:46:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:18.883 "params": { 00:09:18.883 "name": "Nvme1", 00:09:18.883 "trtype": "tcp", 00:09:18.883 "traddr": "10.0.0.2", 00:09:18.883 "adrfam": "ipv4", 00:09:18.883 "trsvcid": "4420", 00:09:18.883 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:18.883 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:18.883 "hdgst": false, 00:09:18.883 "ddgst": false 00:09:18.883 }, 00:09:18.883 "method": "bdev_nvme_attach_controller" 00:09:18.883 }' 00:09:18.883 [2024-11-20 18:46:40.724149] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:09:18.883 [2024-11-20 18:46:40.724190] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3538238 ] 00:09:18.883 [2024-11-20 18:46:40.801147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:18.883 [2024-11-20 18:46:40.844951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:18.883 [2024-11-20 18:46:40.845059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.883 [2024-11-20 18:46:40.845059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:18.883 I/O targets: 00:09:18.883 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:18.883 00:09:18.883 00:09:18.883 CUnit - A unit testing framework for C - Version 2.1-3 00:09:18.883 http://cunit.sourceforge.net/ 00:09:18.883 00:09:18.883 00:09:18.883 Suite: bdevio tests on: Nvme1n1 00:09:18.884 Test: blockdev write read block ...passed 00:09:18.884 Test: blockdev write zeroes read block ...passed 00:09:18.884 Test: blockdev write zeroes read no split ...passed 00:09:18.884 Test: blockdev write zeroes read split 
...passed 00:09:18.884 Test: blockdev write zeroes read split partial ...passed 00:09:18.884 Test: blockdev reset ...[2024-11-20 18:46:41.115817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:18.884 [2024-11-20 18:46:41.115882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1805340 (9): Bad file descriptor 00:09:18.884 [2024-11-20 18:46:41.168490] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:09:18.884 passed 00:09:18.884 Test: blockdev write read 8 blocks ...passed 00:09:18.884 Test: blockdev write read size > 128k ...passed 00:09:18.884 Test: blockdev write read invalid size ...passed 00:09:19.142 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:19.142 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:19.142 Test: blockdev write read max offset ...passed 00:09:19.142 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:19.142 Test: blockdev writev readv 8 blocks ...passed 00:09:19.142 Test: blockdev writev readv 30 x 1block ...passed 00:09:19.142 Test: blockdev writev readv block ...passed 00:09:19.142 Test: blockdev writev readv size > 128k ...passed 00:09:19.142 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:19.142 Test: blockdev comparev and writev ...[2024-11-20 18:46:41.339004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:19.142 [2024-11-20 18:46:41.339049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:19.142 [2024-11-20 18:46:41.339063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:19.142 [2024-11-20 
18:46:41.339071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:19.142 [2024-11-20 18:46:41.339306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:19.142 [2024-11-20 18:46:41.339317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:19.142 [2024-11-20 18:46:41.339329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:19.142 [2024-11-20 18:46:41.339336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:19.142 [2024-11-20 18:46:41.339574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:19.142 [2024-11-20 18:46:41.339584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:19.142 [2024-11-20 18:46:41.339596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:19.142 [2024-11-20 18:46:41.339602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:19.142 [2024-11-20 18:46:41.339844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:19.143 [2024-11-20 18:46:41.339854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:19.143 [2024-11-20 18:46:41.339865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:09:19.143 [2024-11-20 18:46:41.339873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:19.143 passed 00:09:19.143 Test: blockdev nvme passthru rw ...passed 00:09:19.143 Test: blockdev nvme passthru vendor specific ...[2024-11-20 18:46:41.422558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:19.143 [2024-11-20 18:46:41.422574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:19.143 [2024-11-20 18:46:41.422674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:19.143 [2024-11-20 18:46:41.422683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:19.143 [2024-11-20 18:46:41.422786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:19.143 [2024-11-20 18:46:41.422795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:19.143 [2024-11-20 18:46:41.422894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:19.143 [2024-11-20 18:46:41.422903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:19.143 passed 00:09:19.143 Test: blockdev nvme admin passthru ...passed 00:09:19.401 Test: blockdev copy ...passed 00:09:19.401 00:09:19.401 Run Summary: Type Total Ran Passed Failed Inactive 00:09:19.401 suites 1 1 n/a 0 0 00:09:19.401 tests 23 23 23 0 0 00:09:19.401 asserts 152 152 152 0 n/a 00:09:19.401 00:09:19.401 Elapsed time = 0.962 seconds 
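The bdevio run above received its controller configuration over `--json /dev/fd/62`, generated by `gen_nvmf_target_json`. A minimal reproduction of the JSON that was printed for this run, with the substituted values hard-coded from the log (the real helper templates them from `$TEST_TRANSPORT`, `$NVMF_FIRST_TARGET_IP`, and `$NVMF_PORT`):

```shell
#!/usr/bin/env bash
# Sketch of the per-controller config entry gen_nvmf_target_json emitted above.
# Values are the ones substituted in this run, not defaults of the helper.
gen_config() {
  cat <<'EOF'
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
gen_config
```

Passing the document over a process-substitution fd (rather than a temp file) is what the `/dev/fd/62` in the invocation corresponds to.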
00:09:19.401 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:19.401 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.401 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:19.401 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.401 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:19.401 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:19.401 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:19.401 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:19.401 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:19.401 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:19.402 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:19.402 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:19.402 rmmod nvme_tcp 00:09:19.402 rmmod nvme_fabrics 00:09:19.402 rmmod nvme_keyring 00:09:19.402 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:19.402 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:19.402 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:09:19.402 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3538104 ']' 00:09:19.402 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3538104 00:09:19.402 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 3538104 ']' 00:09:19.402 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3538104 00:09:19.402 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:19.402 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:19.402 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3538104 00:09:19.660 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:19.660 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:19.660 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3538104' 00:09:19.660 killing process with pid 3538104 00:09:19.660 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3538104 00:09:19.661 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3538104 00:09:19.661 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:19.661 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:19.661 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:19.661 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:19.661 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:19.661 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:19.661 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:19.661 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:09:19.661 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:19.661 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.661 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.661 18:46:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.198 18:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:22.198 00:09:22.198 real 0m9.872s 00:09:22.198 user 0m9.314s 00:09:22.198 sys 0m4.966s 00:09:22.198 18:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.198 18:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:22.198 ************************************ 00:09:22.198 END TEST nvmf_bdevio 00:09:22.198 ************************************ 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:22.198 00:09:22.198 real 4m34.953s 00:09:22.198 user 10m17.144s 00:09:22.198 sys 1m37.282s 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:22.198 ************************************ 00:09:22.198 END TEST nvmf_target_core 00:09:22.198 ************************************ 00:09:22.198 18:46:44 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:22.198 18:46:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:22.198 18:46:44 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.198 18:46:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:09:22.198 ************************************ 00:09:22.198 START TEST nvmf_target_extra 00:09:22.198 ************************************ 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:22.198 * Looking for test storage... 00:09:22.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:22.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.198 --rc genhtml_branch_coverage=1 00:09:22.198 --rc genhtml_function_coverage=1 00:09:22.198 --rc genhtml_legend=1 00:09:22.198 --rc geninfo_all_blocks=1 
00:09:22.198 --rc geninfo_unexecuted_blocks=1 00:09:22.198 00:09:22.198 ' 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:22.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.198 --rc genhtml_branch_coverage=1 00:09:22.198 --rc genhtml_function_coverage=1 00:09:22.198 --rc genhtml_legend=1 00:09:22.198 --rc geninfo_all_blocks=1 00:09:22.198 --rc geninfo_unexecuted_blocks=1 00:09:22.198 00:09:22.198 ' 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:22.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.198 --rc genhtml_branch_coverage=1 00:09:22.198 --rc genhtml_function_coverage=1 00:09:22.198 --rc genhtml_legend=1 00:09:22.198 --rc geninfo_all_blocks=1 00:09:22.198 --rc geninfo_unexecuted_blocks=1 00:09:22.198 00:09:22.198 ' 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:22.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.198 --rc genhtml_branch_coverage=1 00:09:22.198 --rc genhtml_function_coverage=1 00:09:22.198 --rc genhtml_legend=1 00:09:22.198 --rc geninfo_all_blocks=1 00:09:22.198 --rc geninfo_unexecuted_blocks=1 00:09:22.198 00:09:22.198 ' 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.198 18:46:44 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:22.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:22.199 ************************************ 00:09:22.199 START TEST nvmf_example 00:09:22.199 ************************************ 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:22.199 * Looking for test storage... 00:09:22.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:22.199 
18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:22.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.199 --rc genhtml_branch_coverage=1 00:09:22.199 --rc genhtml_function_coverage=1 00:09:22.199 --rc genhtml_legend=1 00:09:22.199 --rc geninfo_all_blocks=1 00:09:22.199 --rc geninfo_unexecuted_blocks=1 00:09:22.199 00:09:22.199 ' 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:22.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.199 --rc genhtml_branch_coverage=1 00:09:22.199 --rc genhtml_function_coverage=1 00:09:22.199 --rc genhtml_legend=1 00:09:22.199 --rc geninfo_all_blocks=1 00:09:22.199 --rc geninfo_unexecuted_blocks=1 00:09:22.199 00:09:22.199 ' 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:22.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.199 --rc genhtml_branch_coverage=1 00:09:22.199 --rc genhtml_function_coverage=1 00:09:22.199 --rc genhtml_legend=1 00:09:22.199 --rc geninfo_all_blocks=1 00:09:22.199 --rc geninfo_unexecuted_blocks=1 00:09:22.199 00:09:22.199 ' 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:22.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.199 --rc 
genhtml_branch_coverage=1 00:09:22.199 --rc genhtml_function_coverage=1 00:09:22.199 --rc genhtml_legend=1 00:09:22.199 --rc geninfo_all_blocks=1 00:09:22.199 --rc geninfo_unexecuted_blocks=1 00:09:22.199 00:09:22.199 ' 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:22.199 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:22.458 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.458 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.458 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.458 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.458 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.458 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.458 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.458 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.458 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.458 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.458 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:22.458 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:22.459 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:22.459 18:46:44 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.459 
18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:22.459 18:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:29.030 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:29.030 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:29.030 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:29.031 18:46:50 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:29.031 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:29.031 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:29.031 Found net devices under 0000:86:00.0: cvl_0_0 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:29.031 18:46:50 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:29.031 Found net devices under 0000:86:00.1: cvl_0_1 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:29.031 
18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:29.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:29.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:09:29.031 00:09:29.031 --- 10.0.0.2 ping statistics --- 00:09:29.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.031 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:29.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:29.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:09:29.031 00:09:29.031 --- 10.0.0.1 ping statistics --- 00:09:29.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.031 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:29.031 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:29.032 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:29.032 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:29.032 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:29.032 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:29.032 18:46:50 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:29.032 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:29.032 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:29.032 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:29.032 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:29.032 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:29.032 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:29.032 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3542055 00:09:29.032 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:29.032 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:29.032 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3542055 00:09:29.032 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 3542055 ']' 00:09:29.032 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.032 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:29.032 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:09:29.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.032 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:29.032 18:46:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:29.291 18:46:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.291 18:46:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:29.291 18:46:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:29.291 18:46:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:29.291 18:46:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:29.291 18:46:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:29.291 18:46:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.291 18:46:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:29.291 18:46:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.291 18:46:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:29.291 18:46:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.291 18:46:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:29.291 18:46:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.291 18:46:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:29.291 
18:46:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:29.291 18:46:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.291 18:46:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:29.291 18:46:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.291 18:46:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:29.291 18:46:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:29.291 18:46:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.291 18:46:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:29.291 18:46:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.291 18:46:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:29.291 18:46:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.291 18:46:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:29.291 18:46:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.291 18:46:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:29.291 18:46:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:41.503 Initializing NVMe Controllers 00:09:41.503 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:41.503 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:41.503 Initialization complete. Launching workers. 00:09:41.503 ======================================================== 00:09:41.503 Latency(us) 00:09:41.503 Device Information : IOPS MiB/s Average min max 00:09:41.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18147.80 70.89 3527.89 686.85 15877.28 00:09:41.503 ======================================================== 00:09:41.503 Total : 18147.80 70.89 3527.89 686.85 15877.28 00:09:41.503 00:09:41.503 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:41.503 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:41.503 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:41.503 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:09:41.503 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:41.503 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:09:41.503 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:41.503 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:41.503 rmmod nvme_tcp 00:09:41.503 rmmod nvme_fabrics 00:09:41.503 rmmod nvme_keyring 00:09:41.503 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:41.503 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:09:41.503 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:09:41.503 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3542055 ']' 00:09:41.503 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3542055 00:09:41.503 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 3542055 ']' 00:09:41.503 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 3542055 00:09:41.503 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:09:41.503 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:41.503 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3542055 00:09:41.503 18:47:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:09:41.503 18:47:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:09:41.503 18:47:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3542055' 00:09:41.503 killing process with pid 3542055 00:09:41.503 18:47:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 3542055 00:09:41.503 18:47:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 3542055 00:09:41.503 nvmf threads initialize successfully 00:09:41.503 bdev subsystem init successfully 00:09:41.503 created a nvmf target service 00:09:41.503 create targets's poll groups done 00:09:41.503 all subsystems of target started 00:09:41.503 nvmf target is running 00:09:41.503 all subsystems of target stopped 00:09:41.503 destroy targets's poll groups done 00:09:41.503 destroyed the nvmf target service 00:09:41.503 bdev subsystem 
finish successfully 00:09:41.503 nvmf threads destroy successfully 00:09:41.503 18:47:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:41.503 18:47:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:41.503 18:47:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:41.504 18:47:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:09:41.504 18:47:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:09:41.504 18:47:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:41.504 18:47:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:09:41.504 18:47:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:41.504 18:47:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:41.504 18:47:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.504 18:47:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:41.504 18:47:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.072 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:42.072 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:42.072 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:42.072 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:42.072 00:09:42.072 real 0m19.999s 00:09:42.072 user 0m46.620s 00:09:42.072 sys 0m6.160s 00:09:42.072 
18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.072 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:42.072 ************************************ 00:09:42.072 END TEST nvmf_example 00:09:42.072 ************************************ 00:09:42.072 18:47:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:42.072 18:47:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:42.072 18:47:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.072 18:47:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:42.333 ************************************ 00:09:42.333 START TEST nvmf_filesystem 00:09:42.333 ************************************ 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:42.333 * Looking for test storage... 
00:09:42.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:42.333 
18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:42.333 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:42.333 --rc genhtml_branch_coverage=1 00:09:42.333 --rc genhtml_function_coverage=1 00:09:42.333 --rc genhtml_legend=1 00:09:42.333 --rc geninfo_all_blocks=1 00:09:42.333 --rc geninfo_unexecuted_blocks=1 00:09:42.333 00:09:42.333 ' 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:42.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.333 --rc genhtml_branch_coverage=1 00:09:42.333 --rc genhtml_function_coverage=1 00:09:42.333 --rc genhtml_legend=1 00:09:42.333 --rc geninfo_all_blocks=1 00:09:42.333 --rc geninfo_unexecuted_blocks=1 00:09:42.333 00:09:42.333 ' 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:42.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.333 --rc genhtml_branch_coverage=1 00:09:42.333 --rc genhtml_function_coverage=1 00:09:42.333 --rc genhtml_legend=1 00:09:42.333 --rc geninfo_all_blocks=1 00:09:42.333 --rc geninfo_unexecuted_blocks=1 00:09:42.333 00:09:42.333 ' 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:42.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.333 --rc genhtml_branch_coverage=1 00:09:42.333 --rc genhtml_function_coverage=1 00:09:42.333 --rc genhtml_legend=1 00:09:42.333 --rc geninfo_all_blocks=1 00:09:42.333 --rc geninfo_unexecuted_blocks=1 00:09:42.333 00:09:42.333 ' 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:42.333 18:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:42.333 18:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:42.333 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:09:42.334 18:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:09:42.334 18:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:42.334 18:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:42.334 
18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:42.334 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:42.334 #define SPDK_CONFIG_H 00:09:42.334 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:42.334 #define SPDK_CONFIG_APPS 1 00:09:42.334 #define SPDK_CONFIG_ARCH native 00:09:42.334 #undef SPDK_CONFIG_ASAN 00:09:42.334 #undef SPDK_CONFIG_AVAHI 00:09:42.334 #undef SPDK_CONFIG_CET 00:09:42.334 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:42.334 #define SPDK_CONFIG_COVERAGE 1 00:09:42.334 #define SPDK_CONFIG_CROSS_PREFIX 00:09:42.334 #undef SPDK_CONFIG_CRYPTO 00:09:42.334 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:42.334 #undef SPDK_CONFIG_CUSTOMOCF 00:09:42.334 #undef SPDK_CONFIG_DAOS 00:09:42.334 #define SPDK_CONFIG_DAOS_DIR 00:09:42.334 #define SPDK_CONFIG_DEBUG 1 00:09:42.334 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:42.334 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:42.334 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:42.334 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:42.334 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:42.335 #undef SPDK_CONFIG_DPDK_UADK 00:09:42.335 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:42.335 #define SPDK_CONFIG_EXAMPLES 1 00:09:42.335 #undef SPDK_CONFIG_FC 00:09:42.335 #define SPDK_CONFIG_FC_PATH 00:09:42.335 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:42.335 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:42.335 #define SPDK_CONFIG_FSDEV 1 00:09:42.335 #undef SPDK_CONFIG_FUSE 00:09:42.335 #undef SPDK_CONFIG_FUZZER 00:09:42.335 #define SPDK_CONFIG_FUZZER_LIB 00:09:42.335 #undef SPDK_CONFIG_GOLANG 00:09:42.335 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:42.335 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:42.335 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:42.335 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:42.335 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:42.335 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:42.335 #undef SPDK_CONFIG_HAVE_LZ4 00:09:42.335 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:42.335 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:42.335 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:42.335 #define SPDK_CONFIG_IDXD 1 00:09:42.335 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:42.335 #undef SPDK_CONFIG_IPSEC_MB 00:09:42.335 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:42.335 #define SPDK_CONFIG_ISAL 1 00:09:42.335 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:42.335 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:42.335 #define SPDK_CONFIG_LIBDIR 00:09:42.335 #undef SPDK_CONFIG_LTO 00:09:42.335 #define SPDK_CONFIG_MAX_LCORES 128 00:09:42.335 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:09:42.335 #define SPDK_CONFIG_NVME_CUSE 1 00:09:42.335 #undef SPDK_CONFIG_OCF 00:09:42.335 #define SPDK_CONFIG_OCF_PATH 00:09:42.335 #define SPDK_CONFIG_OPENSSL_PATH 00:09:42.335 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:42.335 #define SPDK_CONFIG_PGO_DIR 00:09:42.335 #undef SPDK_CONFIG_PGO_USE 00:09:42.335 #define SPDK_CONFIG_PREFIX /usr/local 00:09:42.335 #undef SPDK_CONFIG_RAID5F 00:09:42.335 #undef SPDK_CONFIG_RBD 00:09:42.335 #define SPDK_CONFIG_RDMA 1 00:09:42.335 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:42.335 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:42.335 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:42.335 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:42.335 #define SPDK_CONFIG_SHARED 1 00:09:42.335 #undef SPDK_CONFIG_SMA 00:09:42.335 #define SPDK_CONFIG_TESTS 1 00:09:42.335 #undef SPDK_CONFIG_TSAN 00:09:42.335 #define SPDK_CONFIG_UBLK 1 00:09:42.335 #define SPDK_CONFIG_UBSAN 1 00:09:42.335 #undef SPDK_CONFIG_UNIT_TESTS 00:09:42.335 #undef SPDK_CONFIG_URING 00:09:42.335 #define SPDK_CONFIG_URING_PATH 00:09:42.335 #undef SPDK_CONFIG_URING_ZNS 00:09:42.335 #undef SPDK_CONFIG_USDT 00:09:42.335 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:42.335 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:42.335 #define SPDK_CONFIG_VFIO_USER 1 00:09:42.335 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:42.335 #define SPDK_CONFIG_VHOST 1 00:09:42.335 #define SPDK_CONFIG_VIRTIO 1 00:09:42.335 #undef SPDK_CONFIG_VTUNE 00:09:42.335 #define SPDK_CONFIG_VTUNE_DIR 00:09:42.335 #define SPDK_CONFIG_WERROR 1 00:09:42.335 #define SPDK_CONFIG_WPDK_DIR 00:09:42.335 #undef SPDK_CONFIG_XNVME 00:09:42.335 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:42.335 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:42.335 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:42.335 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:42.335 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:42.335 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:42.335 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:42.335 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:42.335 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.335 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.335 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:42.335 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.335 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:42.335 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:42.335 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:42.335 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:42.335 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:42.335 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:42.335 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:42.335 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:42.335 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:42.335 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:42.335 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:42.335 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:42.597 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:42.597 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:42.597 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:42.597 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:42.597 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:42.597 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:42.597 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:42.597 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:42.597 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:42.597 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:42.597 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:42.598 18:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:42.598 
18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:42.598 18:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:42.598 
18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:09:42.598 18:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:42.598 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:09:42.599 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 3544459 ]] 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 3544459 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.jLwWsS 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.jLwWsS/tests/target /tmp/spdk.jLwWsS 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=189083430912 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963973632 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6880542720 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97970618368 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981984768 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169753088 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192797184 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23044096 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97981161472 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981988864 00:09:42.600 18:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=827392 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596382208 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596394496 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:09:42.600 * Looking for test storage... 
00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=189083430912 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9095135232 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:42.600 18:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:42.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:42.600 18:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:42.600 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:42.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.601 --rc genhtml_branch_coverage=1 00:09:42.601 --rc genhtml_function_coverage=1 00:09:42.601 --rc genhtml_legend=1 00:09:42.601 --rc geninfo_all_blocks=1 00:09:42.601 --rc geninfo_unexecuted_blocks=1 00:09:42.601 00:09:42.601 ' 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:42.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.601 --rc genhtml_branch_coverage=1 00:09:42.601 --rc genhtml_function_coverage=1 00:09:42.601 --rc genhtml_legend=1 00:09:42.601 --rc geninfo_all_blocks=1 00:09:42.601 --rc geninfo_unexecuted_blocks=1 00:09:42.601 00:09:42.601 ' 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:42.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.601 --rc genhtml_branch_coverage=1 00:09:42.601 --rc genhtml_function_coverage=1 00:09:42.601 --rc genhtml_legend=1 00:09:42.601 --rc geninfo_all_blocks=1 00:09:42.601 --rc geninfo_unexecuted_blocks=1 00:09:42.601 00:09:42.601 ' 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:42.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.601 --rc genhtml_branch_coverage=1 00:09:42.601 --rc genhtml_function_coverage=1 00:09:42.601 --rc genhtml_legend=1 00:09:42.601 --rc geninfo_all_blocks=1 00:09:42.601 --rc geninfo_unexecuted_blocks=1 00:09:42.601 00:09:42.601 ' 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:42.601 18:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:42.601 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:42.601 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:42.602 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:42.602 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:42.602 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:42.602 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:42.602 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:42.602 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.602 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.602 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.602 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:42.602 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:42.602 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:42.602 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:49.174 18:47:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:49.174 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:49.174 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.174 18:47:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:49.174 Found net devices under 0000:86:00.0: cvl_0_0 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:49.174 Found net devices under 0000:86:00.1: cvl_0_1 00:09:49.174 18:47:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:49.174 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:49.175 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:49.175 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:49.175 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:49.175 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:49.175 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:49.175 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:49.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:49.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:09:49.175 00:09:49.175 --- 10.0.0.2 ping statistics --- 00:09:49.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.175 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:09:49.175 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:49.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:49.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:09:49.175 00:09:49.175 --- 10.0.0.1 ping statistics --- 00:09:49.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.175 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:09:49.175 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:49.175 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:09:49.175 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:49.175 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:49.175 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:49.175 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:49.175 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:49.175 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:49.175 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:49.175 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:49.175 18:47:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:49.175 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.175 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:49.175 ************************************ 00:09:49.175 START TEST nvmf_filesystem_no_in_capsule 00:09:49.175 ************************************ 00:09:49.175 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:09:49.175 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:49.175 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:49.175 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:49.175 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:49.175 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:49.175 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3547498 00:09:49.175 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3547498 00:09:49.175 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:49.175 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 3547498 ']' 00:09:49.175 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.175 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.175 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.175 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.175 18:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:49.175 [2024-11-20 18:47:11.003985] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:09:49.175 [2024-11-20 18:47:11.004037] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.175 [2024-11-20 18:47:11.071173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:49.175 [2024-11-20 18:47:11.116345] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:49.175 [2024-11-20 18:47:11.116378] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:49.175 [2024-11-20 18:47:11.116385] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:49.175 [2024-11-20 18:47:11.116391] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:49.175 [2024-11-20 18:47:11.116396] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:49.175 [2024-11-20 18:47:11.117930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.175 [2024-11-20 18:47:11.117968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:49.175 [2024-11-20 18:47:11.118076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.175 [2024-11-20 18:47:11.118077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:49.175 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:49.175 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:49.175 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:49.175 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:49.175 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:49.175 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:49.175 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:49.175 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:49.175 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.175 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:49.175 [2024-11-20 18:47:11.259550] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:49.175 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.175 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:49.175 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.175 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:49.175 Malloc1 00:09:49.175 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.175 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:49.175 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.175 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:49.175 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.175 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:49.175 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.175 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:49.175 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.175 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:49.175 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.175 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:49.175 [2024-11-20 18:47:11.424769] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:49.175 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.175 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:49.175 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:49.175 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:49.175 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:49.175 18:47:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:49.175 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:49.175 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.176 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:49.176 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.176 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:09:49.176 { 00:09:49.176 "name": "Malloc1", 00:09:49.176 "aliases": [ 00:09:49.176 "c68079f6-52d5-489d-9554-1679af7db554" 00:09:49.176 ], 00:09:49.176 "product_name": "Malloc disk", 00:09:49.176 "block_size": 512, 00:09:49.176 "num_blocks": 1048576, 00:09:49.176 "uuid": "c68079f6-52d5-489d-9554-1679af7db554", 00:09:49.176 "assigned_rate_limits": { 00:09:49.176 "rw_ios_per_sec": 0, 00:09:49.176 "rw_mbytes_per_sec": 0, 00:09:49.176 "r_mbytes_per_sec": 0, 00:09:49.176 "w_mbytes_per_sec": 0 00:09:49.176 }, 00:09:49.176 "claimed": true, 00:09:49.176 "claim_type": "exclusive_write", 00:09:49.176 "zoned": false, 00:09:49.176 "supported_io_types": { 00:09:49.176 "read": true, 00:09:49.176 "write": true, 00:09:49.176 "unmap": true, 00:09:49.176 "flush": true, 00:09:49.176 "reset": true, 00:09:49.176 "nvme_admin": false, 00:09:49.176 "nvme_io": false, 00:09:49.176 "nvme_io_md": false, 00:09:49.176 "write_zeroes": true, 00:09:49.176 "zcopy": true, 00:09:49.176 "get_zone_info": false, 00:09:49.176 "zone_management": false, 00:09:49.176 "zone_append": false, 00:09:49.176 "compare": false, 00:09:49.176 "compare_and_write": 
false, 00:09:49.176 "abort": true, 00:09:49.176 "seek_hole": false, 00:09:49.176 "seek_data": false, 00:09:49.176 "copy": true, 00:09:49.176 "nvme_iov_md": false 00:09:49.176 }, 00:09:49.176 "memory_domains": [ 00:09:49.176 { 00:09:49.176 "dma_device_id": "system", 00:09:49.176 "dma_device_type": 1 00:09:49.176 }, 00:09:49.176 { 00:09:49.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.176 "dma_device_type": 2 00:09:49.176 } 00:09:49.176 ], 00:09:49.176 "driver_specific": {} 00:09:49.176 } 00:09:49.176 ]' 00:09:49.176 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:49.434 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:49.434 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:49.434 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:49.434 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:49.434 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:49.434 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:49.434 18:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:50.810 18:47:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:09:50.810 18:47:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:50.810 18:47:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:50.810 18:47:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:50.810 18:47:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:52.713 18:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:52.713 18:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:52.713 18:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:52.713 18:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:52.713 18:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:52.713 18:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:09:52.713 18:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:52.713 18:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:52.713 18:47:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:52.713 18:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:52.713 18:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:52.713 18:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:52.713 18:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:52.713 18:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:52.713 18:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:52.713 18:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:52.713 18:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:52.972 18:47:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:53.540 18:47:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:54.476 18:47:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:54.477 18:47:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:54.477 18:47:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:54.477 18:47:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.477 18:47:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:54.736 ************************************ 00:09:54.736 START TEST filesystem_ext4 00:09:54.736 ************************************ 00:09:54.736 18:47:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:54.736 18:47:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:54.736 18:47:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:54.736 18:47:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:54.736 18:47:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:09:54.736 18:47:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:54.736 18:47:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:09:54.736 18:47:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:09:54.736 18:47:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:09:54.736 18:47:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:09:54.736 18:47:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:54.736 mke2fs 1.47.0 (5-Feb-2023) 00:09:54.736 Discarding device blocks: 0/522240 done 00:09:54.736 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:54.736 Filesystem UUID: bcb0a660-8c08-424d-a04b-9636a508b891 00:09:54.736 Superblock backups stored on blocks: 00:09:54.736 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:54.736 00:09:54.736 Allocating group tables: 0/64 done 00:09:54.736 Writing inode tables: 0/64 done 00:09:54.736 Creating journal (8192 blocks): done 00:09:54.736 Writing superblocks and filesystem accounting information: 0/64 done 00:09:54.736 00:09:54.736 18:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:09:54.736 18:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:01.303 18:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:01.303 18:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:01.304 18:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:01.304 18:47:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:01.304 18:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:01.304 18:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:01.304 18:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3547498 00:10:01.304 18:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:01.304 18:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:01.304 18:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:01.304 18:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:01.304 00:10:01.304 real 0m5.672s 00:10:01.304 user 0m0.016s 00:10:01.304 sys 0m0.083s 00:10:01.304 18:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.304 18:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:01.304 ************************************ 00:10:01.304 END TEST filesystem_ext4 00:10:01.304 ************************************ 00:10:01.304 18:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:01.304 
18:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:01.304 18:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.304 18:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:01.304 ************************************ 00:10:01.304 START TEST filesystem_btrfs 00:10:01.304 ************************************ 00:10:01.304 18:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:01.304 18:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:01.304 18:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:01.304 18:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:01.304 18:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:01.304 18:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:01.304 18:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:01.304 18:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:01.304 18:47:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:01.304 18:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:01.304 18:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:01.304 btrfs-progs v6.8.1 00:10:01.304 See https://btrfs.readthedocs.io for more information. 00:10:01.304 00:10:01.304 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:01.304 NOTE: several default settings have changed in version 5.15, please make sure 00:10:01.304 this does not affect your deployments: 00:10:01.304 - DUP for metadata (-m dup) 00:10:01.304 - enabled no-holes (-O no-holes) 00:10:01.304 - enabled free-space-tree (-R free-space-tree) 00:10:01.304 00:10:01.304 Label: (null) 00:10:01.304 UUID: a72dbd63-28df-4528-bc67-e9d96f8dc49a 00:10:01.304 Node size: 16384 00:10:01.304 Sector size: 4096 (CPU page size: 4096) 00:10:01.304 Filesystem size: 510.00MiB 00:10:01.304 Block group profiles: 00:10:01.304 Data: single 8.00MiB 00:10:01.304 Metadata: DUP 32.00MiB 00:10:01.304 System: DUP 8.00MiB 00:10:01.304 SSD detected: yes 00:10:01.304 Zoned device: no 00:10:01.304 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:01.304 Checksum: crc32c 00:10:01.304 Number of devices: 1 00:10:01.304 Devices: 00:10:01.304 ID SIZE PATH 00:10:01.304 1 510.00MiB /dev/nvme0n1p1 00:10:01.304 00:10:01.304 18:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:01.304 18:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:01.304 18:47:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:01.304 18:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:01.304 18:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:01.304 18:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:01.304 18:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:01.304 18:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:01.304 18:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3547498 00:10:01.304 18:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:01.304 18:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:01.304 18:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:01.304 18:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:01.304 00:10:01.304 real 0m0.746s 00:10:01.304 user 0m0.025s 00:10:01.304 sys 0m0.116s 00:10:01.304 18:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.304 
18:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:01.304 ************************************ 00:10:01.304 END TEST filesystem_btrfs 00:10:01.304 ************************************ 00:10:01.304 18:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:01.304 18:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:01.304 18:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.304 18:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:01.304 ************************************ 00:10:01.304 START TEST filesystem_xfs 00:10:01.304 ************************************ 00:10:01.304 18:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:01.304 18:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:01.304 18:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:01.304 18:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:01.305 18:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:01.305 18:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:01.305 18:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:01.305 18:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:01.305 18:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:01.305 18:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:01.305 18:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:01.305 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:01.305 = sectsz=512 attr=2, projid32bit=1 00:10:01.305 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:01.305 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:01.305 data = bsize=4096 blocks=130560, imaxpct=25 00:10:01.305 = sunit=0 swidth=0 blks 00:10:01.305 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:01.305 log =internal log bsize=4096 blocks=16384, version=2 00:10:01.305 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:01.305 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:01.872 Discarding blocks...Done. 
00:10:01.872 18:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:01.872 18:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:04.405 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:04.405 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:04.405 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:04.405 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:04.405 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:04.405 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:04.405 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3547498 00:10:04.405 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:04.405 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:04.405 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:04.405 18:47:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:04.405 00:10:04.405 real 0m2.985s 00:10:04.405 user 0m0.025s 00:10:04.405 sys 0m0.073s 00:10:04.405 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.405 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:04.405 ************************************ 00:10:04.405 END TEST filesystem_xfs 00:10:04.405 ************************************ 00:10:04.405 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:04.406 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:04.406 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:04.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.664 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:04.665 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:04.665 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:04.665 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:04.665 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:04.665 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:04.665 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:04.665 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:04.665 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.665 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.665 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.665 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:04.665 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3547498 00:10:04.665 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3547498 ']' 00:10:04.665 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3547498 00:10:04.665 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:04.665 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:04.665 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3547498 00:10:04.665 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:04.665 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:04.665 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3547498' 00:10:04.665 killing process with pid 3547498 00:10:04.665 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 3547498 00:10:04.665 18:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 3547498 00:10:04.924 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:04.924 00:10:04.924 real 0m16.216s 00:10:04.924 user 1m3.810s 00:10:04.924 sys 0m1.367s 00:10:04.924 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.924 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.924 ************************************ 00:10:04.924 END TEST nvmf_filesystem_no_in_capsule 00:10:04.924 ************************************ 00:10:04.924 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:04.924 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:04.924 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.924 18:47:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:04.924 ************************************ 00:10:04.924 START TEST nvmf_filesystem_in_capsule 00:10:04.924 ************************************ 00:10:04.924 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:04.924 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:04.924 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:04.924 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:04.924 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:04.924 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.924 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3550484 00:10:04.924 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3550484 00:10:04.924 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:04.924 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3550484 ']' 00:10:04.924 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.924 18:47:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:04.924 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.924 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:04.924 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:05.184 [2024-11-20 18:47:27.296460] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:10:05.184 [2024-11-20 18:47:27.296500] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:05.184 [2024-11-20 18:47:27.371335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:05.184 [2024-11-20 18:47:27.413789] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:05.184 [2024-11-20 18:47:27.413824] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:05.184 [2024-11-20 18:47:27.413831] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:05.184 [2024-11-20 18:47:27.413837] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:05.184 [2024-11-20 18:47:27.413842] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:05.184 [2024-11-20 18:47:27.415417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:05.184 [2024-11-20 18:47:27.415527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:05.184 [2024-11-20 18:47:27.415658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.184 [2024-11-20 18:47:27.415660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:05.443 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:05.443 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:05.443 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:05.443 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:05.443 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:05.443 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:05.443 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:05.443 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:05.443 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.443 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:05.443 [2024-11-20 18:47:27.553783] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:05.443 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.443 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:05.443 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.443 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:05.443 Malloc1 00:10:05.443 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.443 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:05.443 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.443 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:05.443 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.443 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:05.443 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.443 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:05.443 18:47:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.443 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:05.443 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.443 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:05.443 [2024-11-20 18:47:27.704369] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:05.443 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.443 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:05.443 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:05.443 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:05.443 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:05.443 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:05.443 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:05.443 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.443 18:47:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:05.443 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.443 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:05.443 { 00:10:05.443 "name": "Malloc1", 00:10:05.443 "aliases": [ 00:10:05.443 "07194422-d52c-416f-9995-996011e9a22d" 00:10:05.443 ], 00:10:05.443 "product_name": "Malloc disk", 00:10:05.443 "block_size": 512, 00:10:05.443 "num_blocks": 1048576, 00:10:05.443 "uuid": "07194422-d52c-416f-9995-996011e9a22d", 00:10:05.443 "assigned_rate_limits": { 00:10:05.443 "rw_ios_per_sec": 0, 00:10:05.443 "rw_mbytes_per_sec": 0, 00:10:05.443 "r_mbytes_per_sec": 0, 00:10:05.443 "w_mbytes_per_sec": 0 00:10:05.443 }, 00:10:05.443 "claimed": true, 00:10:05.443 "claim_type": "exclusive_write", 00:10:05.443 "zoned": false, 00:10:05.443 "supported_io_types": { 00:10:05.443 "read": true, 00:10:05.443 "write": true, 00:10:05.443 "unmap": true, 00:10:05.443 "flush": true, 00:10:05.443 "reset": true, 00:10:05.443 "nvme_admin": false, 00:10:05.443 "nvme_io": false, 00:10:05.443 "nvme_io_md": false, 00:10:05.443 "write_zeroes": true, 00:10:05.443 "zcopy": true, 00:10:05.443 "get_zone_info": false, 00:10:05.443 "zone_management": false, 00:10:05.443 "zone_append": false, 00:10:05.443 "compare": false, 00:10:05.443 "compare_and_write": false, 00:10:05.443 "abort": true, 00:10:05.443 "seek_hole": false, 00:10:05.443 "seek_data": false, 00:10:05.443 "copy": true, 00:10:05.443 "nvme_iov_md": false 00:10:05.443 }, 00:10:05.443 "memory_domains": [ 00:10:05.443 { 00:10:05.443 "dma_device_id": "system", 00:10:05.443 "dma_device_type": 1 00:10:05.443 }, 00:10:05.443 { 00:10:05.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.443 "dma_device_type": 2 00:10:05.443 } 00:10:05.443 ], 00:10:05.443 
"driver_specific": {} 00:10:05.443 } 00:10:05.443 ]' 00:10:05.443 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:05.702 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:05.702 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:05.702 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:05.702 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:05.702 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:05.702 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:05.702 18:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:06.638 18:47:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:06.638 18:47:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:06.638 18:47:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:06.638 18:47:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:10:06.638 18:47:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:09.173 18:47:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:09.173 18:47:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:09.173 18:47:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:09.173 18:47:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:09.173 18:47:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:09.173 18:47:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:09.173 18:47:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:09.173 18:47:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:09.173 18:47:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:09.173 18:47:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:09.173 18:47:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:09.173 18:47:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:09.173 18:47:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:09.173 18:47:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:09.173 18:47:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:09.174 18:47:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:09.174 18:47:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:09.174 18:47:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:09.433 18:47:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:10.370 18:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:10.370 18:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:10.370 18:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:10.370 18:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.370 18:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:10.370 ************************************ 00:10:10.370 START TEST filesystem_in_capsule_ext4 00:10:10.370 ************************************ 00:10:10.370 18:47:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:10.370 18:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:10.370 18:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:10.370 18:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:10.370 18:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:10.370 18:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:10.370 18:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:10.370 18:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:10.370 18:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:10.370 18:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:10.370 18:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:10.370 mke2fs 1.47.0 (5-Feb-2023) 00:10:10.630 Discarding device blocks: 
0/522240 done 00:10:10.630 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:10.630 Filesystem UUID: 4e5812f2-93ba-4be0-8d5b-aaf8e9cbea5c 00:10:10.630 Superblock backups stored on blocks: 00:10:10.630 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:10.630 00:10:10.630 Allocating group tables: 0/64 done 00:10:10.630 Writing inode tables: 0/64 done 00:10:10.630 Creating journal (8192 blocks): done 00:10:10.630 Writing superblocks and filesystem accounting information: 0/64 done 00:10:10.630 00:10:10.630 18:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:10.630 18:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:17.192 18:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:17.192 18:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:17.192 18:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:17.192 18:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:17.192 18:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:17.192 18:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:17.192 18:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 3550484 00:10:17.192 18:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:17.192 18:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:17.192 18:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:17.192 18:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:17.192 00:10:17.192 real 0m6.039s 00:10:17.192 user 0m0.019s 00:10:17.192 sys 0m0.075s 00:10:17.192 18:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.193 18:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:17.193 ************************************ 00:10:17.193 END TEST filesystem_in_capsule_ext4 00:10:17.193 ************************************ 00:10:17.193 18:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:17.193 18:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:17.193 18:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.193 18:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:17.193 ************************************ 00:10:17.193 START 
TEST filesystem_in_capsule_btrfs 00:10:17.193 ************************************ 00:10:17.193 18:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:17.193 18:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:17.193 18:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:17.193 18:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:17.193 18:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:17.193 18:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:17.193 18:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:17.193 18:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:17.193 18:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:17.193 18:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:17.193 18:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:17.193 btrfs-progs v6.8.1 00:10:17.193 See https://btrfs.readthedocs.io for more information. 00:10:17.193 00:10:17.193 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:17.193 NOTE: several default settings have changed in version 5.15, please make sure 00:10:17.193 this does not affect your deployments: 00:10:17.193 - DUP for metadata (-m dup) 00:10:17.193 - enabled no-holes (-O no-holes) 00:10:17.193 - enabled free-space-tree (-R free-space-tree) 00:10:17.193 00:10:17.193 Label: (null) 00:10:17.193 UUID: 4feeb22a-1dab-4c23-9527-17a222ddaeab 00:10:17.193 Node size: 16384 00:10:17.193 Sector size: 4096 (CPU page size: 4096) 00:10:17.193 Filesystem size: 510.00MiB 00:10:17.193 Block group profiles: 00:10:17.193 Data: single 8.00MiB 00:10:17.193 Metadata: DUP 32.00MiB 00:10:17.193 System: DUP 8.00MiB 00:10:17.193 SSD detected: yes 00:10:17.193 Zoned device: no 00:10:17.193 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:17.193 Checksum: crc32c 00:10:17.193 Number of devices: 1 00:10:17.193 Devices: 00:10:17.193 ID SIZE PATH 00:10:17.193 1 510.00MiB /dev/nvme0n1p1 00:10:17.193 00:10:17.193 18:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:17.193 18:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:17.452 18:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:17.452 18:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:17.452 18:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:17.452 18:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:17.452 18:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:17.452 18:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:17.452 18:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3550484 00:10:17.452 18:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:17.452 18:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:17.452 18:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:17.452 18:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:17.452 00:10:17.452 real 0m0.869s 00:10:17.452 user 0m0.026s 00:10:17.452 sys 0m0.118s 00:10:17.453 18:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.453 18:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:17.453 ************************************ 00:10:17.453 END TEST filesystem_in_capsule_btrfs 00:10:17.453 ************************************ 00:10:17.453 18:47:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:17.453 18:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:17.453 18:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.453 18:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:17.453 ************************************ 00:10:17.453 START TEST filesystem_in_capsule_xfs 00:10:17.453 ************************************ 00:10:17.453 18:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:17.453 18:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:17.453 18:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:17.453 18:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:17.453 18:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:17.453 18:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:17.453 18:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:17.453 
18:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:10:17.453 18:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:17.453 18:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:17.453 18:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:17.711 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:17.711 = sectsz=512 attr=2, projid32bit=1 00:10:17.711 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:17.711 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:17.711 data = bsize=4096 blocks=130560, imaxpct=25 00:10:17.711 = sunit=0 swidth=0 blks 00:10:17.711 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:17.711 log =internal log bsize=4096 blocks=16384, version=2 00:10:17.711 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:17.711 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:18.738 Discarding blocks...Done. 
00:10:18.738 18:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:18.738 18:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:20.673 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:20.673 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:20.673 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:20.673 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:20.673 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:20.673 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:20.673 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3550484 00:10:20.673 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:20.673 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:20.673 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:10:20.673 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:20.673 00:10:20.673 real 0m2.878s 00:10:20.673 user 0m0.023s 00:10:20.673 sys 0m0.073s 00:10:20.673 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:20.673 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:20.673 ************************************ 00:10:20.673 END TEST filesystem_in_capsule_xfs 00:10:20.673 ************************************ 00:10:20.673 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:20.673 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:20.673 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:20.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.673 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:20.673 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:20.673 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:20.673 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:20.673 18:47:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:20.673 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:20.673 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:20.673 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:20.673 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.673 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.673 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.673 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:20.673 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3550484 00:10:20.673 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3550484 ']' 00:10:20.673 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3550484 00:10:20.673 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:20.673 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:20.673 18:47:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3550484 00:10:20.673 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:20.674 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:20.674 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3550484' 00:10:20.674 killing process with pid 3550484 00:10:20.674 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 3550484 00:10:20.674 18:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 3550484 00:10:20.933 18:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:20.933 00:10:20.933 real 0m15.957s 00:10:20.933 user 1m2.692s 00:10:20.933 sys 0m1.405s 00:10:20.933 18:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:20.933 18:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.933 ************************************ 00:10:20.933 END TEST nvmf_filesystem_in_capsule 00:10:20.933 ************************************ 00:10:20.933 18:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:20.933 18:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:20.933 18:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:20.933 18:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:20.933 18:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:20.933 18:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:20.933 18:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:20.933 rmmod nvme_tcp 00:10:21.193 rmmod nvme_fabrics 00:10:21.193 rmmod nvme_keyring 00:10:21.193 18:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:21.193 18:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:21.193 18:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:21.193 18:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:21.193 18:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:21.193 18:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:21.193 18:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:21.193 18:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:21.193 18:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:21.193 18:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:21.193 18:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:21.193 18:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:21.193 18:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:21.193 18:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.193 18:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:21.193 18:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.097 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:23.097 00:10:23.097 real 0m40.963s 00:10:23.097 user 2m8.580s 00:10:23.097 sys 0m7.504s 00:10:23.097 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.097 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:23.097 ************************************ 00:10:23.097 END TEST nvmf_filesystem 00:10:23.097 ************************************ 00:10:23.097 18:47:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:23.097 18:47:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:23.097 18:47:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.097 18:47:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:23.357 ************************************ 00:10:23.357 START TEST nvmf_target_discovery 00:10:23.357 ************************************ 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:23.357 * Looking for test storage... 
00:10:23.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:23.357 
18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:23.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.357 --rc genhtml_branch_coverage=1 00:10:23.357 --rc genhtml_function_coverage=1 00:10:23.357 --rc genhtml_legend=1 00:10:23.357 --rc geninfo_all_blocks=1 00:10:23.357 --rc geninfo_unexecuted_blocks=1 00:10:23.357 00:10:23.357 ' 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:23.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.357 --rc genhtml_branch_coverage=1 00:10:23.357 --rc genhtml_function_coverage=1 00:10:23.357 --rc genhtml_legend=1 00:10:23.357 --rc geninfo_all_blocks=1 00:10:23.357 --rc geninfo_unexecuted_blocks=1 00:10:23.357 00:10:23.357 ' 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:23.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.357 --rc genhtml_branch_coverage=1 00:10:23.357 --rc genhtml_function_coverage=1 00:10:23.357 --rc genhtml_legend=1 00:10:23.357 --rc geninfo_all_blocks=1 00:10:23.357 --rc geninfo_unexecuted_blocks=1 00:10:23.357 00:10:23.357 ' 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:23.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.357 --rc genhtml_branch_coverage=1 00:10:23.357 --rc genhtml_function_coverage=1 00:10:23.357 --rc genhtml_legend=1 00:10:23.357 --rc geninfo_all_blocks=1 00:10:23.357 --rc geninfo_unexecuted_blocks=1 00:10:23.357 00:10:23.357 ' 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:23.357 18:47:45 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:23.357 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:23.358 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:23.358 18:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.932 18:47:51 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:29.932 18:47:51 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:29.932 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:29.932 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:29.932 18:47:51 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:29.932 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:29.933 Found net devices under 0000:86:00.0: cvl_0_0 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:29.933 18:47:51 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:29.933 Found net devices under 0000:86:00.1: cvl_0_1 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:29.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:29.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:10:29.933 00:10:29.933 --- 10.0.0.2 ping statistics --- 00:10:29.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.933 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:29.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:29.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:10:29.933 00:10:29.933 --- 10.0.0.1 ping statistics --- 00:10:29.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.933 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3556774 00:10:29.933 18:47:51 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3556774 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 3556774 ']' 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.933 [2024-11-20 18:47:51.737801] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:10:29.933 [2024-11-20 18:47:51.737840] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.933 [2024-11-20 18:47:51.795170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:29.933 [2024-11-20 18:47:51.838284] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:29.933 [2024-11-20 18:47:51.838314] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:29.933 [2024-11-20 18:47:51.838321] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:29.933 [2024-11-20 18:47:51.838326] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:29.933 [2024-11-20 18:47:51.838332] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:29.933 [2024-11-20 18:47:51.839841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.933 [2024-11-20 18:47:51.839949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:29.933 [2024-11-20 18:47:51.840033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.933 [2024-11-20 18:47:51.840034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.933 [2024-11-20 18:47:51.985989] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.933 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:29.934 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:29.934 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:29.934 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.934 18:47:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.934 Null1 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.934 
18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.934 [2024-11-20 18:47:52.040388] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.934 Null2 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.934 
18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.934 Null3 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.934 Null4 00:10:29.934 
18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.934 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:10:30.194 00:10:30.194 Discovery Log Number of Records 6, Generation counter 6 00:10:30.194 =====Discovery Log Entry 0====== 00:10:30.194 trtype: tcp 00:10:30.194 adrfam: ipv4 00:10:30.194 subtype: current discovery subsystem 00:10:30.194 treq: not required 00:10:30.194 portid: 0 00:10:30.194 trsvcid: 4420 00:10:30.194 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:30.194 traddr: 10.0.0.2 00:10:30.194 eflags: explicit discovery connections, duplicate discovery information 00:10:30.194 sectype: none 00:10:30.194 =====Discovery Log Entry 1====== 00:10:30.194 trtype: tcp 00:10:30.194 adrfam: ipv4 00:10:30.194 subtype: nvme subsystem 00:10:30.194 treq: not required 00:10:30.194 portid: 0 00:10:30.194 trsvcid: 4420 00:10:30.194 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:30.194 traddr: 10.0.0.2 00:10:30.194 eflags: none 00:10:30.194 sectype: none 00:10:30.194 =====Discovery Log Entry 2====== 00:10:30.194 
trtype: tcp 00:10:30.194 adrfam: ipv4 00:10:30.194 subtype: nvme subsystem 00:10:30.194 treq: not required 00:10:30.194 portid: 0 00:10:30.194 trsvcid: 4420 00:10:30.194 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:30.194 traddr: 10.0.0.2 00:10:30.194 eflags: none 00:10:30.194 sectype: none 00:10:30.194 =====Discovery Log Entry 3====== 00:10:30.194 trtype: tcp 00:10:30.194 adrfam: ipv4 00:10:30.194 subtype: nvme subsystem 00:10:30.194 treq: not required 00:10:30.194 portid: 0 00:10:30.194 trsvcid: 4420 00:10:30.194 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:30.194 traddr: 10.0.0.2 00:10:30.194 eflags: none 00:10:30.194 sectype: none 00:10:30.194 =====Discovery Log Entry 4====== 00:10:30.194 trtype: tcp 00:10:30.194 adrfam: ipv4 00:10:30.194 subtype: nvme subsystem 00:10:30.194 treq: not required 00:10:30.194 portid: 0 00:10:30.194 trsvcid: 4420 00:10:30.194 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:30.194 traddr: 10.0.0.2 00:10:30.194 eflags: none 00:10:30.194 sectype: none 00:10:30.194 =====Discovery Log Entry 5====== 00:10:30.194 trtype: tcp 00:10:30.194 adrfam: ipv4 00:10:30.194 subtype: discovery subsystem referral 00:10:30.194 treq: not required 00:10:30.194 portid: 0 00:10:30.194 trsvcid: 4430 00:10:30.194 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:30.194 traddr: 10.0.0.2 00:10:30.194 eflags: none 00:10:30.194 sectype: none 00:10:30.194 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:30.194 Perform nvmf subsystem discovery via RPC 00:10:30.194 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:30.194 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.194 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:30.194 [ 00:10:30.194 { 00:10:30.194 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:10:30.194 "subtype": "Discovery", 00:10:30.194 "listen_addresses": [ 00:10:30.194 { 00:10:30.194 "trtype": "TCP", 00:10:30.194 "adrfam": "IPv4", 00:10:30.194 "traddr": "10.0.0.2", 00:10:30.194 "trsvcid": "4420" 00:10:30.194 } 00:10:30.194 ], 00:10:30.194 "allow_any_host": true, 00:10:30.194 "hosts": [] 00:10:30.194 }, 00:10:30.194 { 00:10:30.194 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:30.194 "subtype": "NVMe", 00:10:30.194 "listen_addresses": [ 00:10:30.194 { 00:10:30.194 "trtype": "TCP", 00:10:30.194 "adrfam": "IPv4", 00:10:30.194 "traddr": "10.0.0.2", 00:10:30.194 "trsvcid": "4420" 00:10:30.194 } 00:10:30.194 ], 00:10:30.194 "allow_any_host": true, 00:10:30.194 "hosts": [], 00:10:30.194 "serial_number": "SPDK00000000000001", 00:10:30.194 "model_number": "SPDK bdev Controller", 00:10:30.194 "max_namespaces": 32, 00:10:30.194 "min_cntlid": 1, 00:10:30.194 "max_cntlid": 65519, 00:10:30.194 "namespaces": [ 00:10:30.194 { 00:10:30.194 "nsid": 1, 00:10:30.194 "bdev_name": "Null1", 00:10:30.194 "name": "Null1", 00:10:30.194 "nguid": "914F08A8B909479C9F519EDB16768EA1", 00:10:30.194 "uuid": "914f08a8-b909-479c-9f51-9edb16768ea1" 00:10:30.194 } 00:10:30.194 ] 00:10:30.194 }, 00:10:30.194 { 00:10:30.194 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:30.194 "subtype": "NVMe", 00:10:30.194 "listen_addresses": [ 00:10:30.194 { 00:10:30.194 "trtype": "TCP", 00:10:30.194 "adrfam": "IPv4", 00:10:30.194 "traddr": "10.0.0.2", 00:10:30.194 "trsvcid": "4420" 00:10:30.194 } 00:10:30.194 ], 00:10:30.194 "allow_any_host": true, 00:10:30.194 "hosts": [], 00:10:30.194 "serial_number": "SPDK00000000000002", 00:10:30.194 "model_number": "SPDK bdev Controller", 00:10:30.194 "max_namespaces": 32, 00:10:30.194 "min_cntlid": 1, 00:10:30.194 "max_cntlid": 65519, 00:10:30.194 "namespaces": [ 00:10:30.194 { 00:10:30.194 "nsid": 1, 00:10:30.194 "bdev_name": "Null2", 00:10:30.194 "name": "Null2", 00:10:30.194 "nguid": "A719D292AFAA448BB33C2AF2228248E1", 
00:10:30.194 "uuid": "a719d292-afaa-448b-b33c-2af2228248e1" 00:10:30.194 } 00:10:30.194 ] 00:10:30.194 }, 00:10:30.194 { 00:10:30.194 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:30.194 "subtype": "NVMe", 00:10:30.194 "listen_addresses": [ 00:10:30.194 { 00:10:30.194 "trtype": "TCP", 00:10:30.194 "adrfam": "IPv4", 00:10:30.194 "traddr": "10.0.0.2", 00:10:30.194 "trsvcid": "4420" 00:10:30.194 } 00:10:30.194 ], 00:10:30.194 "allow_any_host": true, 00:10:30.194 "hosts": [], 00:10:30.194 "serial_number": "SPDK00000000000003", 00:10:30.194 "model_number": "SPDK bdev Controller", 00:10:30.194 "max_namespaces": 32, 00:10:30.194 "min_cntlid": 1, 00:10:30.194 "max_cntlid": 65519, 00:10:30.194 "namespaces": [ 00:10:30.194 { 00:10:30.194 "nsid": 1, 00:10:30.194 "bdev_name": "Null3", 00:10:30.194 "name": "Null3", 00:10:30.194 "nguid": "B99376C5FF064E74994D5C2E1A52B9F1", 00:10:30.194 "uuid": "b99376c5-ff06-4e74-994d-5c2e1a52b9f1" 00:10:30.194 } 00:10:30.194 ] 00:10:30.194 }, 00:10:30.194 { 00:10:30.194 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:30.194 "subtype": "NVMe", 00:10:30.194 "listen_addresses": [ 00:10:30.194 { 00:10:30.194 "trtype": "TCP", 00:10:30.194 "adrfam": "IPv4", 00:10:30.194 "traddr": "10.0.0.2", 00:10:30.194 "trsvcid": "4420" 00:10:30.194 } 00:10:30.194 ], 00:10:30.194 "allow_any_host": true, 00:10:30.194 "hosts": [], 00:10:30.194 "serial_number": "SPDK00000000000004", 00:10:30.194 "model_number": "SPDK bdev Controller", 00:10:30.194 "max_namespaces": 32, 00:10:30.194 "min_cntlid": 1, 00:10:30.194 "max_cntlid": 65519, 00:10:30.194 "namespaces": [ 00:10:30.194 { 00:10:30.194 "nsid": 1, 00:10:30.194 "bdev_name": "Null4", 00:10:30.194 "name": "Null4", 00:10:30.194 "nguid": "A11DED5454594D60814EB95598CEE229", 00:10:30.194 "uuid": "a11ded54-5459-4d60-814e-b95598cee229" 00:10:30.194 } 00:10:30.194 ] 00:10:30.194 } 00:10:30.194 ] 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.195 
18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:30.195 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:30.195 rmmod nvme_tcp 00:10:30.195 rmmod nvme_fabrics 00:10:30.455 rmmod nvme_keyring 00:10:30.455 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:30.455 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:30.455 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:30.455 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3556774 ']' 00:10:30.455 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3556774 00:10:30.455 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 3556774 ']' 00:10:30.455 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 3556774 00:10:30.455 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:10:30.455 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:30.455 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3556774 00:10:30.455 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:30.455 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:30.455 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3556774' 00:10:30.455 killing process with pid 3556774 00:10:30.455 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 3556774 00:10:30.455 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 3556774 00:10:30.455 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:30.455 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:30.455 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:30.455 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:10:30.455 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:10:30.455 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:30.455 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:10:30.455 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:30.455 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:30.455 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.455 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.455 18:47:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.991 18:47:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:32.991 00:10:32.991 real 0m9.372s 00:10:32.991 user 0m5.564s 00:10:32.991 sys 0m4.828s 00:10:32.991 18:47:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.991 18:47:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:32.991 ************************************ 00:10:32.991 END TEST nvmf_target_discovery 00:10:32.991 ************************************ 00:10:32.991 18:47:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:32.991 18:47:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:32.991 18:47:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.991 18:47:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:32.991 ************************************ 00:10:32.991 START TEST nvmf_referrals 00:10:32.991 ************************************ 00:10:32.991 18:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:32.991 * Looking for test storage... 
00:10:32.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:32.991 18:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:32.991 18:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:10:32.991 18:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:32.991 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:32.991 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:32.991 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:32.991 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:32.991 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:32.991 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:32.991 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:32.991 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:32.991 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:32.991 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:32.991 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:32.991 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:32.991 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:32.991 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:32.991 18:47:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:32.991 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:32.991 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:32.991 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:32.991 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:32.991 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:32.991 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:32.991 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:32.991 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:32.991 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:32.991 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:32.991 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:32.991 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:32.991 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:32.991 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:32.991 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:32.991 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:32.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.991 
--rc genhtml_branch_coverage=1 00:10:32.991 --rc genhtml_function_coverage=1 00:10:32.991 --rc genhtml_legend=1 00:10:32.991 --rc geninfo_all_blocks=1 00:10:32.991 --rc geninfo_unexecuted_blocks=1 00:10:32.991 00:10:32.991 ' 00:10:32.991 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:32.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.991 --rc genhtml_branch_coverage=1 00:10:32.991 --rc genhtml_function_coverage=1 00:10:32.991 --rc genhtml_legend=1 00:10:32.991 --rc geninfo_all_blocks=1 00:10:32.991 --rc geninfo_unexecuted_blocks=1 00:10:32.991 00:10:32.991 ' 00:10:32.991 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:32.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.991 --rc genhtml_branch_coverage=1 00:10:32.991 --rc genhtml_function_coverage=1 00:10:32.991 --rc genhtml_legend=1 00:10:32.991 --rc geninfo_all_blocks=1 00:10:32.991 --rc geninfo_unexecuted_blocks=1 00:10:32.991 00:10:32.991 ' 00:10:32.991 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:32.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.991 --rc genhtml_branch_coverage=1 00:10:32.991 --rc genhtml_function_coverage=1 00:10:32.991 --rc genhtml_legend=1 00:10:32.991 --rc geninfo_all_blocks=1 00:10:32.991 --rc geninfo_unexecuted_blocks=1 00:10:32.991 00:10:32.991 ' 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:32.992 
18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:32.992 18:47:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:32.992 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:32.992 18:47:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:10:32.992 18:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:39.562 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:39.562 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:39.562 Found net devices under 0000:86:00.0: cvl_0_0 00:10:39.562 18:48:00 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:39.562 Found net devices under 0000:86:00.1: cvl_0_1 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:39.562 18:48:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:39.562 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:39.562 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:39.562 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:39.562 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:39.562 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:39.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.474 ms 00:10:39.562 00:10:39.562 --- 10.0.0.2 ping statistics --- 00:10:39.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.562 rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms 00:10:39.562 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:39.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:39.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:10:39.563 00:10:39.563 --- 10.0.0.1 ping statistics --- 00:10:39.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.563 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3560616 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3560616 00:10:39.563 
18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 3560616 ']' 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:39.563 [2024-11-20 18:48:01.177508] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:10:39.563 [2024-11-20 18:48:01.177554] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.563 [2024-11-20 18:48:01.255121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:39.563 [2024-11-20 18:48:01.297910] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:39.563 [2024-11-20 18:48:01.297944] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:39.563 [2024-11-20 18:48:01.297951] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:39.563 [2024-11-20 18:48:01.297957] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:39.563 [2024-11-20 18:48:01.297962] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:39.563 [2024-11-20 18:48:01.299559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.563 [2024-11-20 18:48:01.299670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.563 [2024-11-20 18:48:01.299777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.563 [2024-11-20 18:48:01.299778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:39.563 [2024-11-20 18:48:01.437664] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:39.563 [2024-11-20 18:48:01.461347] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:39.563 18:48:01 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.563 18:48:01 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:39.563 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.564 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:39.564 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.564 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:39.564 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:39.564 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:39.564 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:10:39.564 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:39.564 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:39.564 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:39.822 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:39.822 18:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:39.822 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:39.822 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.822 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:39.822 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.822 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:39.822 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.822 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:39.822 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.822 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:39.822 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:39.822 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:39.822 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:39.822 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.822 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:39.822 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:39.822 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.822 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:39.822 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:39.822 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:39.822 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:39.822 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:39.822 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:39.822 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:39.822 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:40.079 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:40.079 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:40.079 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:40.079 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:40.079 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:40.079 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:40.079 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:40.337 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:40.337 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:40.337 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:40.337 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:40.337 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:40.337 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:10:40.337 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:40.337 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:40.337 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.337 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:40.595 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.595 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:40.595 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:40.595 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:40.595 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:40.595 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:40.595 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.595 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:40.595 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.595 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:40.595 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:40.595 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:40.595 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:40.595 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:40.595 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:40.595 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:40.595 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:40.595 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:40.595 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:40.595 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:40.595 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:40.595 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:40.595 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:40.595 18:48:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:40.853 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:40.854 18:48:03 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:40.854 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:40.854 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:40.854 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:40.854 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:41.112 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:41.112 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:41.112 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.112 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:41.112 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.112 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:41.112 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:41.112 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.112 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:10:41.112 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.112 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:41.112 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:41.112 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:41.112 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:41.112 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:41.112 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:41.112 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:41.371 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:41.371 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:41.371 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:41.371 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:41.371 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:41.371 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:10:41.371 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:41.371 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:10:41.371 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:41.371 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:41.371 rmmod nvme_tcp 00:10:41.371 rmmod nvme_fabrics 00:10:41.371 rmmod nvme_keyring 00:10:41.371 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:41.371 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:10:41.371 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:10:41.371 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3560616 ']' 00:10:41.371 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3560616 00:10:41.371 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 3560616 ']' 00:10:41.371 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 3560616 00:10:41.371 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:10:41.371 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.371 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3560616 00:10:41.371 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.371 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.371 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3560616' 00:10:41.371 killing process with pid 3560616 00:10:41.371 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 3560616 00:10:41.371 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 3560616 00:10:41.630 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:41.630 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:41.630 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:41.630 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:10:41.630 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:10:41.631 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:41.631 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:10:41.631 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:41.631 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:41.631 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.631 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.631 18:48:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.168 18:48:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:44.168 00:10:44.168 real 0m10.997s 00:10:44.168 user 0m12.553s 00:10:44.168 sys 0m5.311s 00:10:44.168 18:48:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.168 18:48:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:44.168 
************************************ 00:10:44.168 END TEST nvmf_referrals 00:10:44.168 ************************************ 00:10:44.168 18:48:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:44.168 18:48:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:44.168 18:48:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.168 18:48:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:44.168 ************************************ 00:10:44.168 START TEST nvmf_connect_disconnect 00:10:44.168 ************************************ 00:10:44.168 18:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:44.168 * Looking for test storage... 
00:10:44.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:44.168 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:44.168 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:10:44.168 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:44.168 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:44.168 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:44.168 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:44.168 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:44.168 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:10:44.168 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:10:44.168 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:10:44.168 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:10:44.168 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:10:44.168 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:10:44.168 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:10:44.168 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:44.168 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:10:44.168 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:10:44.168 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:44.168 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:44.168 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:10:44.168 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:10:44.168 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:44.168 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:10:44.168 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:10:44.168 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:10:44.168 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:10:44.168 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:44.168 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:10:44.168 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:10:44.168 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:44.168 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:44.168 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:10:44.168 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:44.168 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:44.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.169 --rc genhtml_branch_coverage=1 00:10:44.169 --rc genhtml_function_coverage=1 00:10:44.169 --rc genhtml_legend=1 00:10:44.169 --rc geninfo_all_blocks=1 00:10:44.169 --rc geninfo_unexecuted_blocks=1 00:10:44.169 00:10:44.169 ' 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:44.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.169 --rc genhtml_branch_coverage=1 00:10:44.169 --rc genhtml_function_coverage=1 00:10:44.169 --rc genhtml_legend=1 00:10:44.169 --rc geninfo_all_blocks=1 00:10:44.169 --rc geninfo_unexecuted_blocks=1 00:10:44.169 00:10:44.169 ' 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:44.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.169 --rc genhtml_branch_coverage=1 00:10:44.169 --rc genhtml_function_coverage=1 00:10:44.169 --rc genhtml_legend=1 00:10:44.169 --rc geninfo_all_blocks=1 00:10:44.169 --rc geninfo_unexecuted_blocks=1 00:10:44.169 00:10:44.169 ' 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:44.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.169 --rc genhtml_branch_coverage=1 00:10:44.169 --rc genhtml_function_coverage=1 00:10:44.169 --rc genhtml_legend=1 00:10:44.169 --rc geninfo_all_blocks=1 00:10:44.169 --rc geninfo_unexecuted_blocks=1 00:10:44.169 00:10:44.169 ' 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:44.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:10:44.169 18:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:50.741 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:50.741 18:48:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:10:50.741 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:50.741 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:50.741 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:50.741 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:50.741 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:50.741 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:10:50.741 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:50.741 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:10:50.741 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:10:50.741 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:10:50.741 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:10:50.741 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:10:50.741 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:10:50.741 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:50.741 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:50.741 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:50.741 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:50.741 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:50.741 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:50.741 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:50.741 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:50.741 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:50.741 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:50.741 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:50.741 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:50.741 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:50.741 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:50.741 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:50.741 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:50.741 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:50.741 18:48:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:50.741 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:50.741 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:50.741 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:50.742 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:50.742 18:48:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:50.742 Found net devices under 0000:86:00.0: cvl_0_0 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:50.742 18:48:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:50.742 Found net devices under 0000:86:00.1: cvl_0_1 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:50.742 18:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:50.742 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:50.742 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:50.742 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:50.742 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:50.742 18:48:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:50.742 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:50.742 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:50.742 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:50.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:50.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.403 ms 00:10:50.742 00:10:50.742 --- 10.0.0.2 ping statistics --- 00:10:50.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.742 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:10:50.742 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:50.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:50.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:10:50.742 00:10:50.742 --- 10.0.0.1 ping statistics --- 00:10:50.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.742 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:10:50.742 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:50.742 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:10:50.742 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:50.742 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:50.742 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:50.742 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:50.742 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:50.742 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:50.742 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:50.742 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:50.742 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:50.742 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:50.742 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:50.742 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=3565140 00:10:50.742 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:50.742 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3565140 00:10:50.742 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 3565140 ']' 00:10:50.742 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.742 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.742 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.742 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.742 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:50.742 [2024-11-20 18:48:12.251638] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:10:50.742 [2024-11-20 18:48:12.251687] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.742 [2024-11-20 18:48:12.330789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:50.742 [2024-11-20 18:48:12.372256] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:50.742 [2024-11-20 18:48:12.372292] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:50.742 [2024-11-20 18:48:12.372300] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:50.743 [2024-11-20 18:48:12.372305] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:50.743 [2024-11-20 18:48:12.372310] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:50.743 [2024-11-20 18:48:12.373861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.743 [2024-11-20 18:48:12.373895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:50.743 [2024-11-20 18:48:12.374000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.743 [2024-11-20 18:48:12.374001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:50.743 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:50.743 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:10:50.743 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:50.743 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:50.743 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:50.743 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:50.743 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:50.743 18:48:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.743 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:50.743 [2024-11-20 18:48:12.515308] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:50.743 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.743 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:50.743 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.743 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:50.743 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.743 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:50.743 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:50.743 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.743 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:50.743 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.743 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:50.743 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.743 18:48:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:50.743 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.743 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:50.743 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.743 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:50.743 [2024-11-20 18:48:12.580922] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:50.743 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.743 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:50.743 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:50.743 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:54.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.593 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.157 18:48:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:07.157 18:48:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:07.157 18:48:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:07.157 18:48:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:07.157 18:48:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:07.157 18:48:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:07.157 18:48:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:07.157 18:48:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:07.157 rmmod nvme_tcp 00:11:07.157 rmmod nvme_fabrics 00:11:07.157 rmmod nvme_keyring 00:11:07.157 18:48:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:07.157 18:48:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:07.157 18:48:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:07.157 18:48:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3565140 ']' 00:11:07.157 18:48:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3565140 00:11:07.157 18:48:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3565140 ']' 00:11:07.157 18:48:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 3565140 00:11:07.157 18:48:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:11:07.157 18:48:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:07.157 18:48:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3565140 
00:11:07.157 18:48:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:07.158 18:48:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:07.158 18:48:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3565140' 00:11:07.158 killing process with pid 3565140 00:11:07.158 18:48:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 3565140 00:11:07.158 18:48:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 3565140 00:11:07.158 18:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:07.158 18:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:07.158 18:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:07.158 18:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:07.158 18:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:11:07.158 18:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:07.158 18:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:11:07.158 18:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:07.158 18:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:07.158 18:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.158 18:48:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:07.158 18:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.066 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:09.066 00:11:09.066 real 0m25.232s 00:11:09.066 user 1m8.226s 00:11:09.066 sys 0m5.891s 00:11:09.066 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:09.066 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:09.066 ************************************ 00:11:09.066 END TEST nvmf_connect_disconnect 00:11:09.066 ************************************ 00:11:09.066 18:48:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:09.066 18:48:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:09.066 18:48:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:09.066 18:48:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:09.066 ************************************ 00:11:09.066 START TEST nvmf_multitarget 00:11:09.066 ************************************ 00:11:09.066 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:09.066 * Looking for test storage... 
00:11:09.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:09.066 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:09.066 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:11:09.066 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:09.326 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.326 --rc genhtml_branch_coverage=1 00:11:09.326 --rc genhtml_function_coverage=1 00:11:09.326 --rc genhtml_legend=1 00:11:09.326 --rc geninfo_all_blocks=1 00:11:09.326 --rc geninfo_unexecuted_blocks=1 00:11:09.326 00:11:09.326 ' 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:09.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.326 --rc genhtml_branch_coverage=1 00:11:09.326 --rc genhtml_function_coverage=1 00:11:09.326 --rc genhtml_legend=1 00:11:09.326 --rc geninfo_all_blocks=1 00:11:09.326 --rc geninfo_unexecuted_blocks=1 00:11:09.326 00:11:09.326 ' 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:09.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.326 --rc genhtml_branch_coverage=1 00:11:09.326 --rc genhtml_function_coverage=1 00:11:09.326 --rc genhtml_legend=1 00:11:09.326 --rc geninfo_all_blocks=1 00:11:09.326 --rc geninfo_unexecuted_blocks=1 00:11:09.326 00:11:09.326 ' 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:09.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.326 --rc genhtml_branch_coverage=1 00:11:09.326 --rc genhtml_function_coverage=1 00:11:09.326 --rc genhtml_legend=1 00:11:09.326 --rc geninfo_all_blocks=1 00:11:09.326 --rc geninfo_unexecuted_blocks=1 00:11:09.326 00:11:09.326 ' 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.326 18:48:31 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.326 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.327 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:09.327 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.327 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:09.327 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:09.327 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.327 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.327 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.327 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.327 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.327 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.327 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:09.327 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.327 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:09.327 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:09.327 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:09.327 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.327 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:11:09.327 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.327 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:09.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:09.327 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:09.327 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:09.327 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:09.327 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:09.327 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:09.327 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:09.327 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.327 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:09.327 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:09.327 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:09.327 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.327 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.327 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.327 18:48:31 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:09.327 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:09.327 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:09.327 18:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:11:15.900 18:48:37 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:15.900 18:48:37 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:15.900 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:15.900 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:15.900 18:48:37 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:15.900 Found net devices under 0000:86:00.0: cvl_0_0 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.900 
18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:15.900 Found net devices under 0000:86:00.1: cvl_0_1 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:15.900 18:48:37 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:15.900 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:15.901 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:15.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:15.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms 00:11:15.901 00:11:15.901 --- 10.0.0.2 ping statistics --- 00:11:15.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.901 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:11:15.901 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:15.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:15.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:11:15.901 00:11:15.901 --- 10.0.0.1 ping statistics --- 00:11:15.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.901 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:11:15.901 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:15.901 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:11:15.901 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:15.901 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:15.901 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:15.901 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:15.901 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:15.901 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:15.901 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:15.901 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:15.901 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:15.901 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:15.901 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:15.901 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3571510 00:11:15.901 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:15.901 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3571510 00:11:15.901 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 3571510 ']' 00:11:15.901 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.901 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:15.901 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.901 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:15.901 18:48:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:15.901 [2024-11-20 18:48:37.506745] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:11:15.901 [2024-11-20 18:48:37.506794] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.901 [2024-11-20 18:48:37.586732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:15.901 [2024-11-20 18:48:37.626494] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:15.901 [2024-11-20 18:48:37.626533] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:15.901 [2024-11-20 18:48:37.626540] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:15.901 [2024-11-20 18:48:37.626546] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:15.901 [2024-11-20 18:48:37.626551] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:15.901 [2024-11-20 18:48:37.628147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.901 [2024-11-20 18:48:37.628269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.901 [2024-11-20 18:48:37.628316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.901 [2024-11-20 18:48:37.628316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:16.159 18:48:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:16.159 18:48:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:11:16.159 18:48:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:16.159 18:48:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:16.159 18:48:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:16.159 18:48:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:16.159 18:48:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:16.159 18:48:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:16.159 18:48:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:16.159 18:48:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:16.159 18:48:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:16.417 "nvmf_tgt_1" 00:11:16.417 18:48:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:16.417 "nvmf_tgt_2" 00:11:16.417 18:48:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:16.417 18:48:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:16.674 18:48:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:16.674 18:48:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:16.674 true 00:11:16.675 18:48:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:16.934 true 00:11:16.934 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:16.934 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:16.935 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:16.935 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:16.935 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:16.935 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:16.935 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:16.935 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:16.935 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:16.935 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:16.935 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:16.935 rmmod nvme_tcp 00:11:16.935 rmmod nvme_fabrics 00:11:16.935 rmmod nvme_keyring 00:11:16.935 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:16.935 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:16.935 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:16.935 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3571510 ']' 00:11:16.935 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3571510 00:11:16.935 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 3571510 ']' 00:11:16.935 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 3571510 00:11:16.935 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:11:16.935 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:16.935 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3571510 00:11:16.935 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:16.935 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:16.935 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3571510' 00:11:16.935 killing process with pid 3571510 00:11:16.935 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 3571510 00:11:16.935 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 3571510 00:11:17.194 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:17.194 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:17.194 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:17.194 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:17.194 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:11:17.194 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:17.194 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:11:17.194 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:17.194 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:17.194 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.194 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.194 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:19.730 00:11:19.730 real 0m10.207s 00:11:19.730 user 0m9.743s 00:11:19.730 sys 0m4.956s 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:19.730 ************************************ 00:11:19.730 END TEST nvmf_multitarget 00:11:19.730 ************************************ 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:19.730 ************************************ 00:11:19.730 START TEST nvmf_rpc 00:11:19.730 ************************************ 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:19.730 * Looking for test storage... 
00:11:19.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:19.730 18:48:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:19.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.730 --rc genhtml_branch_coverage=1 00:11:19.730 --rc genhtml_function_coverage=1 00:11:19.730 --rc genhtml_legend=1 00:11:19.730 --rc geninfo_all_blocks=1 00:11:19.730 --rc geninfo_unexecuted_blocks=1 
00:11:19.730 00:11:19.730 ' 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:19.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.730 --rc genhtml_branch_coverage=1 00:11:19.730 --rc genhtml_function_coverage=1 00:11:19.730 --rc genhtml_legend=1 00:11:19.730 --rc geninfo_all_blocks=1 00:11:19.730 --rc geninfo_unexecuted_blocks=1 00:11:19.730 00:11:19.730 ' 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:19.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.730 --rc genhtml_branch_coverage=1 00:11:19.730 --rc genhtml_function_coverage=1 00:11:19.730 --rc genhtml_legend=1 00:11:19.730 --rc geninfo_all_blocks=1 00:11:19.730 --rc geninfo_unexecuted_blocks=1 00:11:19.730 00:11:19.730 ' 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:19.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.730 --rc genhtml_branch_coverage=1 00:11:19.730 --rc genhtml_function_coverage=1 00:11:19.730 --rc genhtml_legend=1 00:11:19.730 --rc geninfo_all_blocks=1 00:11:19.730 --rc geninfo_unexecuted_blocks=1 00:11:19.730 00:11:19.730 ' 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.730 18:48:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.730 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:19.731 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:19.731 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:19.731 18:48:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.302 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:26.302 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:11:26.302 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:26.302 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:26.302 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:26.302 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:26.302 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:26.302 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:11:26.302 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:26.302 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:11:26.302 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:11:26.302 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:11:26.302 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:11:26.302 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:11:26.302 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:11:26.302 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:26.302 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:26.302 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:26.302 
18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:26.302 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:26.302 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:26.302 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:26.302 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:26.302 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:26.302 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:26.302 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:26.302 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:26.302 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:26.302 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:26.302 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:26.302 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:26.302 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:26.302 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 
(0x8086 - 0x159b)' 00:11:26.303 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:26.303 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:26.303 Found net devices under 0000:86:00.0: cvl_0_0 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:26.303 Found net devices under 0000:86:00.1: cvl_0_1 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.303 18:48:47 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:26.303 
18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:26.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:26.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.496 ms 00:11:26.303 00:11:26.303 --- 10.0.0.2 ping statistics --- 00:11:26.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.303 rtt min/avg/max/mdev = 0.496/0.496/0.496/0.000 ms 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:26.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:26.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:11:26.303 00:11:26.303 --- 10.0.0.1 ping statistics --- 00:11:26.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.303 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3575350 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3575350 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 3575350 ']' 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:26.303 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.303 [2024-11-20 18:48:47.846868] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:11:26.303 [2024-11-20 18:48:47.846916] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.303 [2024-11-20 18:48:47.926866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:26.303 [2024-11-20 18:48:47.969130] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.303 [2024-11-20 18:48:47.969168] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:26.303 [2024-11-20 18:48:47.969175] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:26.303 [2024-11-20 18:48:47.969181] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:26.303 [2024-11-20 18:48:47.969187] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:26.303 [2024-11-20 18:48:47.970771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.303 [2024-11-20 18:48:47.970882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.303 [2024-11-20 18:48:47.970883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:26.303 [2024-11-20 18:48:47.970788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.304 18:48:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:26.304 "tick_rate": 2100000000, 00:11:26.304 "poll_groups": [ 00:11:26.304 { 00:11:26.304 "name": "nvmf_tgt_poll_group_000", 00:11:26.304 "admin_qpairs": 0, 00:11:26.304 "io_qpairs": 0, 00:11:26.304 "current_admin_qpairs": 0, 00:11:26.304 "current_io_qpairs": 0, 00:11:26.304 "pending_bdev_io": 0, 00:11:26.304 "completed_nvme_io": 0, 00:11:26.304 "transports": [] 00:11:26.304 }, 00:11:26.304 { 00:11:26.304 "name": "nvmf_tgt_poll_group_001", 00:11:26.304 "admin_qpairs": 0, 00:11:26.304 "io_qpairs": 0, 00:11:26.304 "current_admin_qpairs": 0, 00:11:26.304 "current_io_qpairs": 0, 00:11:26.304 "pending_bdev_io": 0, 00:11:26.304 "completed_nvme_io": 0, 00:11:26.304 "transports": [] 00:11:26.304 }, 00:11:26.304 { 00:11:26.304 "name": "nvmf_tgt_poll_group_002", 00:11:26.304 "admin_qpairs": 0, 00:11:26.304 "io_qpairs": 0, 00:11:26.304 "current_admin_qpairs": 0, 00:11:26.304 "current_io_qpairs": 0, 00:11:26.304 "pending_bdev_io": 0, 00:11:26.304 "completed_nvme_io": 0, 00:11:26.304 "transports": [] 00:11:26.304 }, 00:11:26.304 { 00:11:26.304 "name": "nvmf_tgt_poll_group_003", 00:11:26.304 "admin_qpairs": 0, 00:11:26.304 "io_qpairs": 0, 00:11:26.304 "current_admin_qpairs": 0, 00:11:26.304 "current_io_qpairs": 0, 00:11:26.304 "pending_bdev_io": 0, 00:11:26.304 "completed_nvme_io": 0, 00:11:26.304 "transports": [] 00:11:26.304 } 00:11:26.304 ] 00:11:26.304 }' 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:26.304 18:48:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.304 [2024-11-20 18:48:48.225699] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:26.304 "tick_rate": 2100000000, 00:11:26.304 "poll_groups": [ 00:11:26.304 { 00:11:26.304 "name": "nvmf_tgt_poll_group_000", 00:11:26.304 "admin_qpairs": 0, 00:11:26.304 "io_qpairs": 0, 00:11:26.304 "current_admin_qpairs": 0, 00:11:26.304 "current_io_qpairs": 0, 00:11:26.304 "pending_bdev_io": 0, 00:11:26.304 "completed_nvme_io": 0, 00:11:26.304 "transports": [ 00:11:26.304 { 00:11:26.304 "trtype": "TCP" 00:11:26.304 } 00:11:26.304 ] 00:11:26.304 }, 00:11:26.304 { 00:11:26.304 "name": "nvmf_tgt_poll_group_001", 00:11:26.304 "admin_qpairs": 0, 00:11:26.304 "io_qpairs": 0, 00:11:26.304 "current_admin_qpairs": 0, 00:11:26.304 "current_io_qpairs": 0, 00:11:26.304 "pending_bdev_io": 0, 00:11:26.304 
"completed_nvme_io": 0, 00:11:26.304 "transports": [ 00:11:26.304 { 00:11:26.304 "trtype": "TCP" 00:11:26.304 } 00:11:26.304 ] 00:11:26.304 }, 00:11:26.304 { 00:11:26.304 "name": "nvmf_tgt_poll_group_002", 00:11:26.304 "admin_qpairs": 0, 00:11:26.304 "io_qpairs": 0, 00:11:26.304 "current_admin_qpairs": 0, 00:11:26.304 "current_io_qpairs": 0, 00:11:26.304 "pending_bdev_io": 0, 00:11:26.304 "completed_nvme_io": 0, 00:11:26.304 "transports": [ 00:11:26.304 { 00:11:26.304 "trtype": "TCP" 00:11:26.304 } 00:11:26.304 ] 00:11:26.304 }, 00:11:26.304 { 00:11:26.304 "name": "nvmf_tgt_poll_group_003", 00:11:26.304 "admin_qpairs": 0, 00:11:26.304 "io_qpairs": 0, 00:11:26.304 "current_admin_qpairs": 0, 00:11:26.304 "current_io_qpairs": 0, 00:11:26.304 "pending_bdev_io": 0, 00:11:26.304 "completed_nvme_io": 0, 00:11:26.304 "transports": [ 00:11:26.304 { 00:11:26.304 "trtype": "TCP" 00:11:26.304 } 00:11:26.304 ] 00:11:26.304 } 00:11:26.304 ] 00:11:26.304 }' 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:26.304 
18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.304 Malloc1 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:26.304 18:48:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.304 [2024-11-20 18:48:48.407678] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:26.304 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:26.305 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:26.305 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:26.305 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:26.305 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:26.305 [2024-11-20 18:48:48.436276] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:11:26.305 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:26.305 could not add new controller: failed to write to nvme-fabrics device 00:11:26.305 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:26.305 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:26.305 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:26.305 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:26.305 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:26.305 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.305 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.305 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.305 18:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:27.776 18:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:27.776 18:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:27.776 18:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:27.776 18:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:27.776 18:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
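The `waitforserial` step traced above polls `lsblk` until a block device with the expected NVMe serial appears. A minimal stand-alone sketch of that loop, with names matching the trace (`waitforserial`, `nvme_devices`, `nvme_device_counter`); the stubbed `lsblk` function at the top is purely illustrative so the sketch runs without a real NVMe-oF connection:

```shell
#!/usr/bin/env bash
# Illustrative stub: shadows the real lsblk so this sketch is self-contained.
# In the actual test run, lsblk lists the block devices created by `nvme connect`.
lsblk() { printf 'NAME    SERIAL\nnvme0n1 SPDKISFASTANDAWESOME\n'; }

# Sketch of the polling loop seen in the trace: retry up to 16 times,
# sleeping 2s between attempts, until the device count matches.
waitforserial() {
  local serial=$1
  local nvme_device_counter=${2:-1}
  local i=0 nvme_devices=0
  while (( i++ <= 15 )); do
    nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
    (( nvme_devices == nvme_device_counter )) && return 0
    sleep 2
  done
  return 1
}

waitforserial SPDKISFASTANDAWESOME && echo connected
```

With the stub in place the first iteration finds one matching device and the function returns 0, mirroring the `nvme_devices=1` / `return 0` lines in the trace.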
00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:29.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:29.679 18:48:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:29.679 [2024-11-20 18:48:51.802625] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:11:29.679 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:29.679 could not add new controller: failed to write to nvme-fabrics device 00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:29.679 
18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.679 18:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:30.613 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:30.613 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:30.613 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:30.613 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:30.613 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:33.141 18:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:33.141 18:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:33.141 18:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:11:33.141 18:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:33.141 18:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:33.141 18:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:33.141 18:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:33.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.141 18:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:33.141 18:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:33.141 18:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:33.141 18:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:33.141 18:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:33.141 18:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:33.141 18:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:33.141 18:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:33.141 18:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.141 18:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.141 18:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.141 18:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:33.141 18:48:55 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:33.141 18:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:33.142 18:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.142 18:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.142 18:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.142 18:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:33.142 18:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.142 18:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.142 [2024-11-20 18:48:55.114620] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:33.142 18:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.142 18:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:33.142 18:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.142 18:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.142 18:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.142 18:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:33.142 18:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.142 18:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:11:33.142 18:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.142 18:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:34.075 18:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:34.075 18:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:34.075 18:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:34.075 18:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:34.075 18:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:35.976 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:35.976 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:35.976 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:35.976 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:35.976 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:35.976 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:35.976 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:36.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.233 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:36.233 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:36.234 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:36.234 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:36.234 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:36.234 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:36.234 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:36.234 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:36.234 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.234 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.234 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.234 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:36.234 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.234 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.234 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.234 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:36.234 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:36.234 
18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.234 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.234 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.234 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:36.234 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.234 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.234 [2024-11-20 18:48:58.418566] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:36.234 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.234 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:36.234 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.234 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.234 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.234 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:36.234 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.234 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.234 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.234 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:37.606 18:48:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:37.606 18:48:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:37.606 18:48:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:37.606 18:48:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:37.606 18:48:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:39.500 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:39.500 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:39.500 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:39.500 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:39.500 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:39.500 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:39.500 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:39.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.500 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:39.500 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:39.501 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:39.501 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:39.501 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:39.501 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:39.501 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:39.501 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:39.501 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.501 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.501 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.501 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:39.501 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.501 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.501 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.501 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:39.501 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:39.501 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.501 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.501 18:49:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.501 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:39.501 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.501 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.501 [2024-11-20 18:49:01.729243] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:39.501 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.501 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:39.501 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.501 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.501 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.501 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:39.501 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.501 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.501 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.501 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:40.873 18:49:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:40.873 18:49:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:40.873 18:49:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:40.873 18:49:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:40.873 18:49:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:42.774 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:42.774 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:42.774 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:42.774 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:42.774 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:42.774 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:42.774 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:42.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.774 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:42.774 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:42.774 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:42.774 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:42.774 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:42.774 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:42.774 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:42.774 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:42.774 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.774 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.774 18:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.774 18:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:42.774 18:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.774 18:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.774 18:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.774 18:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:42.774 18:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:42.774 18:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.774 18:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.774 18:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.774 18:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:11:42.774 18:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.774 18:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.774 [2024-11-20 18:49:05.026143] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:42.774 18:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.774 18:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:42.774 18:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.774 18:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.774 18:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.774 18:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:42.774 18:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.774 18:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.774 18:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.774 18:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:44.149 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:44.149 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:44.149 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:11:44.149 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:44.149 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:46.050 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:46.050 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:46.050 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:46.050 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:46.050 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:46.050 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:46.050 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:46.050 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.050 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:46.050 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:46.050 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:46.050 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.050 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:46.050 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.309 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:11:46.309 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:46.309 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.309 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.309 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.309 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:46.309 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.309 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.309 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.309 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:46.309 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:46.309 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.309 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.309 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.309 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.309 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.309 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.309 [2024-11-20 18:49:08.413099] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.309 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.309 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:46.309 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.309 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.309 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.309 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:46.309 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.309 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.309 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.309 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:47.253 18:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:47.253 18:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:47.253 18:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:47.253 18:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:47.253 18:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:49.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.783 [2024-11-20 18:49:11.682473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.783 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.784 [2024-11-20 18:49:11.730557] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.784 
18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:11:49.784 [2024-11-20 18:49:11.778717] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.784 [2024-11-20 18:49:11.826875] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.784 [2024-11-20 18:49:11.875017] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.784 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.785 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:49.785 "tick_rate": 2100000000, 00:11:49.785 "poll_groups": [ 00:11:49.785 { 00:11:49.785 "name": "nvmf_tgt_poll_group_000", 00:11:49.785 "admin_qpairs": 2, 00:11:49.785 "io_qpairs": 168, 00:11:49.785 "current_admin_qpairs": 0, 00:11:49.785 "current_io_qpairs": 0, 00:11:49.785 "pending_bdev_io": 0, 00:11:49.785 "completed_nvme_io": 276, 00:11:49.785 "transports": [ 00:11:49.785 { 00:11:49.785 "trtype": "TCP" 00:11:49.785 } 00:11:49.785 ] 00:11:49.785 }, 00:11:49.785 { 00:11:49.785 "name": "nvmf_tgt_poll_group_001", 00:11:49.785 "admin_qpairs": 2, 00:11:49.785 "io_qpairs": 168, 00:11:49.785 "current_admin_qpairs": 0, 00:11:49.785 "current_io_qpairs": 0, 00:11:49.785 "pending_bdev_io": 0, 00:11:49.785 "completed_nvme_io": 224, 00:11:49.785 "transports": [ 00:11:49.785 { 00:11:49.785 "trtype": "TCP" 00:11:49.785 } 00:11:49.785 ] 00:11:49.785 }, 00:11:49.785 { 00:11:49.785 "name": "nvmf_tgt_poll_group_002", 00:11:49.785 "admin_qpairs": 1, 00:11:49.785 "io_qpairs": 168, 00:11:49.785 "current_admin_qpairs": 0, 00:11:49.785 "current_io_qpairs": 0, 00:11:49.785 "pending_bdev_io": 0, 
00:11:49.785 "completed_nvme_io": 267, 00:11:49.785 "transports": [ 00:11:49.785 { 00:11:49.785 "trtype": "TCP" 00:11:49.785 } 00:11:49.785 ] 00:11:49.785 }, 00:11:49.785 { 00:11:49.785 "name": "nvmf_tgt_poll_group_003", 00:11:49.785 "admin_qpairs": 2, 00:11:49.785 "io_qpairs": 168, 00:11:49.785 "current_admin_qpairs": 0, 00:11:49.785 "current_io_qpairs": 0, 00:11:49.785 "pending_bdev_io": 0, 00:11:49.785 "completed_nvme_io": 255, 00:11:49.785 "transports": [ 00:11:49.785 { 00:11:49.785 "trtype": "TCP" 00:11:49.785 } 00:11:49.785 ] 00:11:49.785 } 00:11:49.785 ] 00:11:49.785 }' 00:11:49.785 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:49.785 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:49.785 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:49.785 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:49.785 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:49.785 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:49.785 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:49.785 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:49.785 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:49.785 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:11:49.785 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:49.785 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:49.785 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:11:49.785 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:49.785 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:11:49.785 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:49.785 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:11:49.785 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:49.785 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:49.785 rmmod nvme_tcp 00:11:49.785 rmmod nvme_fabrics 00:11:49.785 rmmod nvme_keyring 00:11:49.785 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:49.785 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:11:49.785 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:11:49.785 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3575350 ']' 00:11:49.785 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3575350 00:11:49.785 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 3575350 ']' 00:11:49.785 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 3575350 00:11:49.785 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:11:49.785 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:49.785 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3575350 00:11:50.045 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:50.045 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:50.045 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3575350' 00:11:50.045 killing process with pid 3575350 00:11:50.045 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 3575350 00:11:50.045 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 3575350 00:11:50.045 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:50.045 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:50.045 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:50.045 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:11:50.045 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:11:50.045 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:50.045 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:11:50.045 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:50.045 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:50.045 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.045 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:50.045 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:52.582 00:11:52.582 real 0m32.827s 00:11:52.582 user 1m38.679s 00:11:52.582 sys 0m6.589s 00:11:52.582 18:49:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.582 ************************************ 00:11:52.582 END TEST nvmf_rpc 00:11:52.582 ************************************ 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:52.582 ************************************ 00:11:52.582 START TEST nvmf_invalid 00:11:52.582 ************************************ 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:52.582 * Looking for test storage... 
00:11:52.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:52.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.582 --rc genhtml_branch_coverage=1 00:11:52.582 --rc 
genhtml_function_coverage=1 00:11:52.582 --rc genhtml_legend=1 00:11:52.582 --rc geninfo_all_blocks=1 00:11:52.582 --rc geninfo_unexecuted_blocks=1 00:11:52.582 00:11:52.582 ' 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:52.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.582 --rc genhtml_branch_coverage=1 00:11:52.582 --rc genhtml_function_coverage=1 00:11:52.582 --rc genhtml_legend=1 00:11:52.582 --rc geninfo_all_blocks=1 00:11:52.582 --rc geninfo_unexecuted_blocks=1 00:11:52.582 00:11:52.582 ' 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:52.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.582 --rc genhtml_branch_coverage=1 00:11:52.582 --rc genhtml_function_coverage=1 00:11:52.582 --rc genhtml_legend=1 00:11:52.582 --rc geninfo_all_blocks=1 00:11:52.582 --rc geninfo_unexecuted_blocks=1 00:11:52.582 00:11:52.582 ' 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:52.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.582 --rc genhtml_branch_coverage=1 00:11:52.582 --rc genhtml_function_coverage=1 00:11:52.582 --rc genhtml_legend=1 00:11:52.582 --rc geninfo_all_blocks=1 00:11:52.582 --rc geninfo_unexecuted_blocks=1 00:11:52.582 00:11:52.582 ' 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.582 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.583 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.583 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:52.583 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.583 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:11:52.583 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:52.583 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:52.583 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:52.583 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:52.583 18:49:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:52.583 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:52.583 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:52.583 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:52.583 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:52.583 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:52.583 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:52.583 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:52.583 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:52.583 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:52.583 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:52.583 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:52.583 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:52.583 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:52.583 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:52.583 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:52.583 18:49:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.583 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:52.583 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.583 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:52.583 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:52.583 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:11:52.583 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:11:59.154 18:49:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:59.154 18:49:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:59.154 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:59.154 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:11:59.154 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:59.155 Found net devices under 0000:86:00.0: cvl_0_0 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:59.155 Found net devices under 0000:86:00.1: cvl_0_1 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:59.155 18:49:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:59.155 18:49:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:59.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:59.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:11:59.155 00:11:59.155 --- 10.0.0.2 ping statistics --- 00:11:59.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.155 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:59.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:59.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:11:59.155 00:11:59.155 --- 10.0.0.1 ping statistics --- 00:11:59.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.155 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:59.155 18:49:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3582966 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3582966 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 3582966 ']' 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
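The `waitforlisten` step traced above blocks until the SPDK target's RPC socket (`/var/tmp/spdk.sock`) appears, retrying up to `max_retries=100` times. A minimal sketch of that polling idiom — the function name `waitforlisten_sketch`, the fractional sleep interval, and the retry parameterization are assumptions for illustration; only the socket path and retry count come from the log:

```shell
# Hedged sketch of the waitforlisten pattern seen in autotest_common.sh:
# poll for the target's UNIX-domain RPC socket, giving up after max_retries.
waitforlisten_sketch() {
  local rpc_addr=${1:-/var/tmp/spdk.sock} max_retries=${2:-100} i
  for (( i = 0; i < max_retries; i++ )); do
    # -S: true once the path exists and is a socket (i.e. nvmf_tgt is listening)
    [ -S "$rpc_addr" ] && return 0
    sleep 0.1
  done
  return 1
}
```

The real helper additionally checks that the `nvmfpid` process is still alive while waiting, which this sketch omits.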
00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:59.155 [2024-11-20 18:49:20.710973] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:11:59.155 [2024-11-20 18:49:20.711020] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.155 [2024-11-20 18:49:20.775490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:59.155 [2024-11-20 18:49:20.820211] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:59.155 [2024-11-20 18:49:20.820244] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:59.155 [2024-11-20 18:49:20.820251] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:59.155 [2024-11-20 18:49:20.820257] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:59.155 [2024-11-20 18:49:20.820262] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
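The `nvmf_tcp_init` trace earlier in this run (nvmf/common.sh@250–291) moves the target NIC into a private network namespace, addresses both ends, opens TCP/4420, and ping-verifies the path. A condensed sketch of those commands, using the interface names, namespace, and addresses from this log — the `run`/`DRY_RUN` wrapper is an added convenience (the real script runs the commands directly, needs root, and also flushes addresses first and tags the iptables rule with an `SPDK_NVMF:` comment):

```shell
# Prefix every command with echo when DRY_RUN is set, so the sequence can be
# inspected without root privileges.
run() { ${DRY_RUN:+echo} "$@"; }

nvmf_tcp_ns_setup() {
  local ns=cvl_0_0_ns_spdk target_if=cvl_0_0 init_if=cvl_0_1
  run ip netns add "$ns"
  run ip link set "$target_if" netns "$ns"                     # target NIC into ns
  run ip addr add 10.0.0.1/24 dev "$init_if"                   # initiator side
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
  run ip link set "$init_if" up
  run ip netns exec "$ns" ip link set "$target_if" up
  run ip netns exec "$ns" ip link set lo up
  run iptables -I INPUT 1 -i "$init_if" -p tcp --dport 4420 -j ACCEPT
  run ping -c 1 10.0.0.2                                       # verify the path
}
```

Because the target interface lives inside `cvl_0_0_ns_spdk`, every target-side command in the rest of the log is wrapped in `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` array).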
00:11:59.155 [2024-11-20 18:49:20.824219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.155 [2024-11-20 18:49:20.824252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.155 [2024-11-20 18:49:20.824360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.155 [2024-11-20 18:49:20.824360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:59.155 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:59.156 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:59.156 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:59.156 18:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode10297 00:11:59.156 [2024-11-20 18:49:21.143225] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:59.156 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:11:59.156 { 00:11:59.156 "nqn": "nqn.2016-06.io.spdk:cnode10297", 00:11:59.156 "tgt_name": "foobar", 00:11:59.156 "method": "nvmf_create_subsystem", 00:11:59.156 "req_id": 1 00:11:59.156 } 00:11:59.156 Got JSON-RPC error 
response 00:11:59.156 response: 00:11:59.156 { 00:11:59.156 "code": -32603, 00:11:59.156 "message": "Unable to find target foobar" 00:11:59.156 }' 00:11:59.156 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:11:59.156 { 00:11:59.156 "nqn": "nqn.2016-06.io.spdk:cnode10297", 00:11:59.156 "tgt_name": "foobar", 00:11:59.156 "method": "nvmf_create_subsystem", 00:11:59.156 "req_id": 1 00:11:59.156 } 00:11:59.156 Got JSON-RPC error response 00:11:59.156 response: 00:11:59.156 { 00:11:59.156 "code": -32603, 00:11:59.156 "message": "Unable to find target foobar" 00:11:59.156 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:59.156 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:59.156 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode18403 00:11:59.156 [2024-11-20 18:49:21.364014] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18403: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:59.156 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:11:59.156 { 00:11:59.156 "nqn": "nqn.2016-06.io.spdk:cnode18403", 00:11:59.156 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:59.156 "method": "nvmf_create_subsystem", 00:11:59.156 "req_id": 1 00:11:59.156 } 00:11:59.156 Got JSON-RPC error response 00:11:59.156 response: 00:11:59.156 { 00:11:59.156 "code": -32602, 00:11:59.156 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:59.156 }' 00:11:59.156 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:11:59.156 { 00:11:59.156 "nqn": "nqn.2016-06.io.spdk:cnode18403", 00:11:59.156 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:59.156 "method": "nvmf_create_subsystem", 
00:11:59.156 "req_id": 1 00:11:59.156 } 00:11:59.156 Got JSON-RPC error response 00:11:59.156 response: 00:11:59.156 { 00:11:59.156 "code": -32602, 00:11:59.156 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:59.156 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:59.156 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:59.156 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode7376 00:11:59.416 [2024-11-20 18:49:21.568683] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7376: invalid model number 'SPDK_Controller' 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:11:59.416 { 00:11:59.416 "nqn": "nqn.2016-06.io.spdk:cnode7376", 00:11:59.416 "model_number": "SPDK_Controller\u001f", 00:11:59.416 "method": "nvmf_create_subsystem", 00:11:59.416 "req_id": 1 00:11:59.416 } 00:11:59.416 Got JSON-RPC error response 00:11:59.416 response: 00:11:59.416 { 00:11:59.416 "code": -32602, 00:11:59.416 "message": "Invalid MN SPDK_Controller\u001f" 00:11:59.416 }' 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:11:59.416 { 00:11:59.416 "nqn": "nqn.2016-06.io.spdk:cnode7376", 00:11:59.416 "model_number": "SPDK_Controller\u001f", 00:11:59.416 "method": "nvmf_create_subsystem", 00:11:59.416 "req_id": 1 00:11:59.416 } 00:11:59.416 Got JSON-RPC error response 00:11:59.416 response: 00:11:59.416 { 00:11:59.416 "code": -32602, 00:11:59.416 "message": "Invalid MN SPDK_Controller\u001f" 00:11:59.416 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.416 18:49:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:11:59.416 18:49:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.416 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:11:59.417 18:49:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:11:59.417 18:49:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
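The verbose per-character trace here is `gen_random_s` (invalid.sh@19–31) building a random serial number: it keeps a `chars` table of ASCII codes 32–127 and, for each position, prints the chosen code with `printf %x` and appends the character via `echo -e '\xNN'`. A compact sketch of the same idea — how the script actually selects codes is not visible in the trace, so the `RANDOM`-based choice and the octal `printf -v` appending are assumptions; this version also restricts itself to graphic ASCII (33–126), whereas the traced table includes space and DEL:

```shell
# Hedged re-implementation of gen_random_s: emit a random string of the
# requested length, one graphic-ASCII character at a time.
gen_random_s() {
  local length=$1 ll out= code esc ch
  for (( ll = 0; ll < length; ll++ )); do
    code=$(( RANDOM % 94 + 33 ))       # graphic ASCII, 33..126 (assumption)
    printf -v esc '\\%03o' "$code"     # e.g. 97 -> '\141'
    printf -v ch "$esc"                # '\141' -> 'a'
    out+=$ch
  done
  printf '%s\n' "$out"
}
```

The 21- and 41-character strings requested in this test (`gen_random_s 21`, `gen_random_s 41`) are then fed to `nvmf_create_subsystem` as deliberately invalid serial numbers.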
00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:11:59.417 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:11:59.676 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:11:59.676 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.676 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.676 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ c == \- ]] 00:11:59.676 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'c`tRcPVeE{;B3HM.O.T{' 00:11:59.676 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'c`tRcPVeE{;B3HM.O.T{' nqn.2016-06.io.spdk:cnode14788 00:11:59.676 [2024-11-20 18:49:21.905829] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14788: invalid serial number 'c`tRcPVeE{;B3HM.O.T{' 00:11:59.676 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # 
out='request: 00:11:59.676 { 00:11:59.676 "nqn": "nqn.2016-06.io.spdk:cnode14788", 00:11:59.676 "serial_number": "c`tRcPV\u007feE{;B3HM.O.T{", 00:11:59.676 "method": "nvmf_create_subsystem", 00:11:59.676 "req_id": 1 00:11:59.676 } 00:11:59.676 Got JSON-RPC error response 00:11:59.676 response: 00:11:59.676 { 00:11:59.676 "code": -32602, 00:11:59.676 "message": "Invalid SN c`tRcPV\u007feE{;B3HM.O.T{" 00:11:59.676 }' 00:11:59.676 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:11:59.676 { 00:11:59.676 "nqn": "nqn.2016-06.io.spdk:cnode14788", 00:11:59.676 "serial_number": "c`tRcPV\u007feE{;B3HM.O.T{", 00:11:59.676 "method": "nvmf_create_subsystem", 00:11:59.676 "req_id": 1 00:11:59.676 } 00:11:59.676 Got JSON-RPC error response 00:11:59.676 response: 00:11:59.676 { 00:11:59.676 "code": -32602, 00:11:59.676 "message": "Invalid SN c`tRcPV\u007feE{;B3HM.O.T{" 00:11:59.676 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:59.676 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:11:59.676 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:11:59.676 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:59.676 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:59.676 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:59.676 
18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:59.676 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.676 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:11:59.676 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:11:59.676 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:11:59.676 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.676 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.676 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:11:59.676 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:11:59.676 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:11:59.676 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.676 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.676 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:11:59.676 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:11:59.676 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:11:59.676 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.676 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.676 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:11:59.676 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:11:59.676 
18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:11:59.676 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.676 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.676 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:11:59.677 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:11:59.677 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:11:59.677 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.677 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.677 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:11:59.677 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:11:59.677 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:11:59.677 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.677 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.677 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:11:59.677 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:11:59.677 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:11:59.677 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.677 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.677 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:11:59.677 18:49:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:11:59.677 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:11:59.677 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.677 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.677 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:11:59.936 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:59.936 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:11:59.936 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.936 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.936 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:11:59.936 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:11:59.936 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:11:59.936 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.937 18:49:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.937 18:49:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:11:59.937 18:49:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:59.937 18:49:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:11:59.937 18:49:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.937 18:49:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:11:59.937 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.938 18:49:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:11:59.938 18:49:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ K == \- ]] 00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Kj<,c>Ff"*zjoLDc$;9fVBUpO CmSSaS[(' 
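The long per-character trace above is `gen_random_s` from `target/invalid.sh` building a 41-character string: it draws codes from ASCII 32-127, converts each with `printf %x`, and appends the decoded character. A minimal bash reconstruction of that loop is sketched below; the real helper includes space (32) and DEL (127), but this sketch restricts to 33-126 so no characters are lost to command-substitution whitespace stripping, which is an assumption for simplicity.

```shell
# Hedged reconstruction of the gen_random_s helper traced above
# (target/invalid.sh); the 33-126 range is a simplifying assumption.
gen_random_s() {
    local length=$1 ll string= code
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( 33 + RANDOM % 94 ))                   # printable, non-space ASCII
        string+=$(printf "\\$(printf '%03o' "$code")") # append that one character
    done
    printf '%s\n' "$string"
}

gen_random_s 41   # a 41-character candidate serial/model number
```

The test then feeds the result to `rpc.py nvmf_create_subsystem` as a serial number or model number and asserts that the JSON-RPC response contains `Invalid SN` or `Invalid MN`, as seen in the error dumps above.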
00:11:59.938 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Kj<,c>Ff"*zjoLDc$;9fVBUpO CmSSaS[(' nqn.2016-06.io.spdk:cnode25019 00:12:00.196 [2024-11-20 18:49:22.383403] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25019: invalid model number 'Kj<,c>Ff"*zjoLDc$;9fVBUpO CmSSaS[(' 00:12:00.196 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:00.196 { 00:12:00.196 "nqn": "nqn.2016-06.io.spdk:cnode25019", 00:12:00.196 "model_number": "Kj<,c>Ff\"*zjoLDc$;9fVBUpO CmSSaS[(", 00:12:00.196 "method": "nvmf_create_subsystem", 00:12:00.196 "req_id": 1 00:12:00.196 } 00:12:00.196 Got JSON-RPC error response 00:12:00.196 response: 00:12:00.196 { 00:12:00.196 "code": -32602, 00:12:00.196 "message": "Invalid MN Kj<,c>Ff\"*zjoLDc$;9fVBUpO CmSSaS[(" 00:12:00.196 }' 00:12:00.196 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:00.196 { 00:12:00.196 "nqn": "nqn.2016-06.io.spdk:cnode25019", 00:12:00.196 "model_number": "Kj<,c>Ff\"*zjoLDc$;9fVBUpO CmSSaS[(", 00:12:00.196 "method": "nvmf_create_subsystem", 00:12:00.196 "req_id": 1 00:12:00.196 } 00:12:00.196 Got JSON-RPC error response 00:12:00.196 response: 00:12:00.196 { 00:12:00.196 "code": -32602, 00:12:00.196 "message": "Invalid MN Kj<,c>Ff\"*zjoLDc$;9fVBUpO CmSSaS[(" 00:12:00.196 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:00.196 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:00.454 [2024-11-20 18:49:22.600198] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:00.454 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:00.713 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:00.713 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:00.713 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:00.713 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:00.713 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:00.713 [2024-11-20 18:49:23.009542] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:00.971 18:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:00.971 { 00:12:00.971 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:00.971 "listen_address": { 00:12:00.971 "trtype": "tcp", 00:12:00.971 "traddr": "", 00:12:00.971 "trsvcid": "4421" 00:12:00.971 }, 00:12:00.971 "method": "nvmf_subsystem_remove_listener", 00:12:00.971 "req_id": 1 00:12:00.971 } 00:12:00.971 Got JSON-RPC error response 00:12:00.971 response: 00:12:00.971 { 00:12:00.971 "code": -32602, 00:12:00.971 "message": "Invalid parameters" 00:12:00.971 }' 00:12:00.971 18:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:00.971 { 00:12:00.971 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:00.971 "listen_address": { 00:12:00.971 "trtype": "tcp", 00:12:00.971 "traddr": "", 00:12:00.971 "trsvcid": "4421" 00:12:00.971 }, 00:12:00.971 "method": "nvmf_subsystem_remove_listener", 00:12:00.971 "req_id": 1 00:12:00.971 } 00:12:00.971 Got JSON-RPC error response 00:12:00.971 response: 00:12:00.971 { 00:12:00.971 "code": -32602, 00:12:00.971 "message": "Invalid parameters" 00:12:00.971 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* 
]] 00:12:00.971 18:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27106 -i 0 00:12:00.971 [2024-11-20 18:49:23.206177] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27106: invalid cntlid range [0-65519] 00:12:00.971 18:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:00.971 { 00:12:00.971 "nqn": "nqn.2016-06.io.spdk:cnode27106", 00:12:00.971 "min_cntlid": 0, 00:12:00.971 "method": "nvmf_create_subsystem", 00:12:00.971 "req_id": 1 00:12:00.971 } 00:12:00.971 Got JSON-RPC error response 00:12:00.971 response: 00:12:00.971 { 00:12:00.971 "code": -32602, 00:12:00.971 "message": "Invalid cntlid range [0-65519]" 00:12:00.971 }' 00:12:00.971 18:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:00.971 { 00:12:00.971 "nqn": "nqn.2016-06.io.spdk:cnode27106", 00:12:00.971 "min_cntlid": 0, 00:12:00.971 "method": "nvmf_create_subsystem", 00:12:00.971 "req_id": 1 00:12:00.971 } 00:12:00.971 Got JSON-RPC error response 00:12:00.971 response: 00:12:00.971 { 00:12:00.971 "code": -32602, 00:12:00.971 "message": "Invalid cntlid range [0-65519]" 00:12:00.971 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:00.971 18:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21110 -i 65520 00:12:01.230 [2024-11-20 18:49:23.398821] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21110: invalid cntlid range [65520-65519] 00:12:01.230 18:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:01.230 { 00:12:01.230 "nqn": "nqn.2016-06.io.spdk:cnode21110", 00:12:01.230 "min_cntlid": 65520, 00:12:01.230 "method": 
"nvmf_create_subsystem", 00:12:01.230 "req_id": 1 00:12:01.230 } 00:12:01.230 Got JSON-RPC error response 00:12:01.230 response: 00:12:01.230 { 00:12:01.230 "code": -32602, 00:12:01.230 "message": "Invalid cntlid range [65520-65519]" 00:12:01.230 }' 00:12:01.230 18:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:01.230 { 00:12:01.230 "nqn": "nqn.2016-06.io.spdk:cnode21110", 00:12:01.230 "min_cntlid": 65520, 00:12:01.230 "method": "nvmf_create_subsystem", 00:12:01.230 "req_id": 1 00:12:01.230 } 00:12:01.230 Got JSON-RPC error response 00:12:01.230 response: 00:12:01.230 { 00:12:01.230 "code": -32602, 00:12:01.230 "message": "Invalid cntlid range [65520-65519]" 00:12:01.230 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:01.230 18:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27865 -I 0 00:12:01.488 [2024-11-20 18:49:23.595484] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27865: invalid cntlid range [1-0] 00:12:01.488 18:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:01.488 { 00:12:01.488 "nqn": "nqn.2016-06.io.spdk:cnode27865", 00:12:01.488 "max_cntlid": 0, 00:12:01.488 "method": "nvmf_create_subsystem", 00:12:01.488 "req_id": 1 00:12:01.488 } 00:12:01.488 Got JSON-RPC error response 00:12:01.488 response: 00:12:01.488 { 00:12:01.488 "code": -32602, 00:12:01.488 "message": "Invalid cntlid range [1-0]" 00:12:01.488 }' 00:12:01.488 18:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:01.488 { 00:12:01.488 "nqn": "nqn.2016-06.io.spdk:cnode27865", 00:12:01.488 "max_cntlid": 0, 00:12:01.488 "method": "nvmf_create_subsystem", 00:12:01.488 "req_id": 1 00:12:01.488 } 00:12:01.488 Got JSON-RPC error response 00:12:01.488 response: 00:12:01.488 { 
00:12:01.488 "code": -32602, 00:12:01.488 "message": "Invalid cntlid range [1-0]" 00:12:01.488 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:01.488 18:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16572 -I 65520 00:12:01.488 [2024-11-20 18:49:23.796166] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16572: invalid cntlid range [1-65520] 00:12:01.747 18:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:01.747 { 00:12:01.747 "nqn": "nqn.2016-06.io.spdk:cnode16572", 00:12:01.747 "max_cntlid": 65520, 00:12:01.747 "method": "nvmf_create_subsystem", 00:12:01.747 "req_id": 1 00:12:01.747 } 00:12:01.747 Got JSON-RPC error response 00:12:01.747 response: 00:12:01.747 { 00:12:01.747 "code": -32602, 00:12:01.747 "message": "Invalid cntlid range [1-65520]" 00:12:01.747 }' 00:12:01.747 18:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:01.747 { 00:12:01.747 "nqn": "nqn.2016-06.io.spdk:cnode16572", 00:12:01.747 "max_cntlid": 65520, 00:12:01.747 "method": "nvmf_create_subsystem", 00:12:01.747 "req_id": 1 00:12:01.747 } 00:12:01.747 Got JSON-RPC error response 00:12:01.747 response: 00:12:01.747 { 00:12:01.747 "code": -32602, 00:12:01.747 "message": "Invalid cntlid range [1-65520]" 00:12:01.747 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:01.747 18:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12641 -i 6 -I 5 00:12:01.747 [2024-11-20 18:49:24.008928] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12641: invalid cntlid range [6-5] 00:12:01.747 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 
-- # out='request: 00:12:01.747 { 00:12:01.747 "nqn": "nqn.2016-06.io.spdk:cnode12641", 00:12:01.747 "min_cntlid": 6, 00:12:01.747 "max_cntlid": 5, 00:12:01.747 "method": "nvmf_create_subsystem", 00:12:01.747 "req_id": 1 00:12:01.747 } 00:12:01.747 Got JSON-RPC error response 00:12:01.747 response: 00:12:01.747 { 00:12:01.747 "code": -32602, 00:12:01.747 "message": "Invalid cntlid range [6-5]" 00:12:01.747 }' 00:12:01.747 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:01.747 { 00:12:01.747 "nqn": "nqn.2016-06.io.spdk:cnode12641", 00:12:01.747 "min_cntlid": 6, 00:12:01.747 "max_cntlid": 5, 00:12:01.747 "method": "nvmf_create_subsystem", 00:12:01.747 "req_id": 1 00:12:01.747 } 00:12:01.747 Got JSON-RPC error response 00:12:01.747 response: 00:12:01.747 { 00:12:01.747 "code": -32602, 00:12:01.747 "message": "Invalid cntlid range [6-5]" 00:12:01.747 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:01.748 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:02.007 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:02.007 { 00:12:02.007 "name": "foobar", 00:12:02.007 "method": "nvmf_delete_target", 00:12:02.007 "req_id": 1 00:12:02.007 } 00:12:02.007 Got JSON-RPC error response 00:12:02.007 response: 00:12:02.007 { 00:12:02.007 "code": -32602, 00:12:02.007 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:12:02.007 }' 00:12:02.007 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:02.007 { 00:12:02.007 "name": "foobar", 00:12:02.007 "method": "nvmf_delete_target", 00:12:02.007 "req_id": 1 00:12:02.007 } 00:12:02.007 Got JSON-RPC error response 00:12:02.007 response: 00:12:02.007 { 00:12:02.007 "code": -32602, 00:12:02.007 "message": "The specified target doesn't exist, cannot delete it." 00:12:02.007 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:02.007 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:02.007 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:02.007 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:02.007 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:12:02.007 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:02.007 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:12:02.007 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:02.007 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:02.007 rmmod nvme_tcp 00:12:02.007 rmmod nvme_fabrics 00:12:02.007 rmmod nvme_keyring 00:12:02.007 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:02.007 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:12:02.007 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:12:02.007 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3582966 ']' 00:12:02.007 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@518 -- # killprocess 3582966 00:12:02.007 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 3582966 ']' 00:12:02.007 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 3582966 00:12:02.007 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:12:02.007 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:02.007 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3582966 00:12:02.007 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:02.007 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:02.007 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3582966' 00:12:02.007 killing process with pid 3582966 00:12:02.007 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 3582966 00:12:02.007 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 3582966 00:12:02.267 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:02.267 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:02.267 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:02.267 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:12:02.267 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:12:02.267 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:02.267 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
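The `killprocess 3582966` sequence traced above does three things before killing: probes liveness with `kill -0`, reads the process name with `ps -o comm=`, and refuses to signal anything running as `sudo`. A condensed sketch of that flow (function body is ours, reconstructed from the trace; the `sleep` child is a stand-in target):

```shell
# Sketch of the killprocess flow: check the pid is alive, never kill
# sudo itself, then signal and reap.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0        # already gone, nothing to do
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1                # refuse to kill sudo
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # reap if it is our child
}

sleep 30 &
pid=$!
killprocess "$pid"
```

`kill -0` sends no signal; it only asks the kernel whether the pid exists and is signalable, which is why the trace uses it as a cheap liveness probe.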
nvmf/common.sh@791 -- # iptables-restore 00:12:02.267 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:02.267 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:02.267 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.267 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:02.267 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:04.802 00:12:04.802 real 0m12.052s 00:12:04.802 user 0m18.617s 00:12:04.802 sys 0m5.478s 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:04.802 ************************************ 00:12:04.802 END TEST nvmf_invalid 00:12:04.802 ************************************ 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:04.802 ************************************ 00:12:04.802 START TEST nvmf_connect_stress 00:12:04.802 ************************************ 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:04.802 * Looking for test storage... 00:12:04.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:04.802 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
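The long `scripts/common.sh` walk above (`lt 1.15 2` → `cmp_versions 1.15 '<' 2`) is checking whether the installed lcov predates 2.x: split both versions on `.`, pad the shorter one with zeros, and compare field by field numerically. A standalone re-implementation of that idea (function name `ver_lt` is ours, not SPDK's exact helper):

```shell
# Dotted-version "less than": split on '.', compare numerically per
# field, treating missing fields as 0. Mirrors the cmp_versions walk
# traced above in spirit, not line for line.
ver_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions are not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

Note the per-field numeric comparison: a plain string compare would get `1.15` vs `1.9` wrong, which is exactly why the original splits into arrays first.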
-- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:04.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.803 --rc genhtml_branch_coverage=1 00:12:04.803 --rc genhtml_function_coverage=1 00:12:04.803 --rc genhtml_legend=1 00:12:04.803 --rc geninfo_all_blocks=1 00:12:04.803 --rc geninfo_unexecuted_blocks=1 00:12:04.803 00:12:04.803 ' 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:04.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.803 --rc genhtml_branch_coverage=1 00:12:04.803 --rc genhtml_function_coverage=1 00:12:04.803 --rc genhtml_legend=1 00:12:04.803 --rc geninfo_all_blocks=1 00:12:04.803 --rc geninfo_unexecuted_blocks=1 00:12:04.803 00:12:04.803 ' 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:04.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.803 --rc genhtml_branch_coverage=1 00:12:04.803 --rc genhtml_function_coverage=1 00:12:04.803 --rc genhtml_legend=1 00:12:04.803 --rc geninfo_all_blocks=1 00:12:04.803 --rc geninfo_unexecuted_blocks=1 00:12:04.803 00:12:04.803 ' 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:04.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.803 --rc genhtml_branch_coverage=1 00:12:04.803 --rc genhtml_function_coverage=1 00:12:04.803 --rc genhtml_legend=1 00:12:04.803 --rc geninfo_all_blocks=1 00:12:04.803 --rc geninfo_unexecuted_blocks=1 00:12:04.803 00:12:04.803 ' 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:04.803 18:49:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:04.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
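The trace above records a real shell error: `common.sh: line 33: [: : integer expression expected`, produced when an empty string reaches the numeric test `'[' '' -eq 1 ']'`. The script tolerates it because of how the surrounding flags are wired, but the defensive fix is to default the value before the arithmetic test. A small sketch (variable name is illustrative, not the one in common.sh):

```shell
# An unset/empty variable in "[ $var -eq 1 ]" yields
# "[: : integer expression expected". Defaulting with ${var:-0}
# keeps the test well-formed in every case.
maybe_flag=""                        # may be empty, as in the trace
if [ "${maybe_flag:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag unset"
fi
```

`${var:-0}` substitutes `0` when the variable is unset *or* empty, so the `[` builtin always sees a valid integer operand.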
00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:04.803 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:11.377 18:49:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:11.377 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:11.377 18:49:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:11.377 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:11.378 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.378 18:49:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:11.378 Found net devices under 0000:86:00.0: cvl_0_0 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:11.378 Found net devices under 0000:86:00.1: cvl_0_1 
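The device-discovery loop above (`pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` … `Found net devices under 0000:86:00.0: cvl_0_0`) maps a PCI function to its network interface names by globbing sysfs and stripping the path prefix. A self-contained sketch of that sysfs walk (the PCI address is the CI rig's and is an example only; on machines without it the glob simply matches nothing):

```shell
# Map a PCI function to its net interface names via sysfs. If the
# device doesn't exist, the unmatched glob leaves the literal pattern
# in the array, which the -e check catches.
pci=0000:86:00.0   # example address from the trace above
pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )
if [ -e "${pci_net_devs[0]}" ]; then
    pci_net_devs=( "${pci_net_devs[@]##*/}" )   # strip dir, keep ifnames
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
else
    echo "No net devices under $pci"
fi
```

The `${array[@]##*/}` expansion applies the longest-prefix strip to every element at once, which is how the trace turns full sysfs paths into bare names like `cvl_0_0`.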
00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:11.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:11.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.490 ms 00:12:11.378 00:12:11.378 --- 10.0.0.2 ping statistics --- 00:12:11.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.378 rtt min/avg/max/mdev = 0.490/0.490/0.490/0.000 ms 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:11.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:11.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:12:11.378 00:12:11.378 --- 10.0.0.1 ping statistics --- 00:12:11.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.378 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:11.378 18:49:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.378 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3587352 00:12:11.379 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3587352 00:12:11.379 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:11.379 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 3587352 ']' 00:12:11.379 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.379 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:11.379 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.379 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:11.379 18:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.379 [2024-11-20 18:49:32.839478] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:12:11.379 [2024-11-20 18:49:32.839523] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.379 [2024-11-20 18:49:32.919337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:11.379 [2024-11-20 18:49:32.960024] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:11.379 [2024-11-20 18:49:32.960063] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:11.379 [2024-11-20 18:49:32.960070] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:11.379 [2024-11-20 18:49:32.960076] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:11.379 [2024-11-20 18:49:32.960081] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:11.379 [2024-11-20 18:49:32.963219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.379 [2024-11-20 18:49:32.963312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.379 [2024-11-20 18:49:32.963312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.379 [2024-11-20 18:49:33.107348] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.379 [2024-11-20 18:49:33.127582] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.379 NULL1 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3587373 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.379 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.380 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.380 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.380 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.380 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.380 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.380 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.380 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.380 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.380 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.380 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:11.380 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:11.380 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3587373 00:12:11.380 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.380 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.380 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.380 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.380 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3587373 00:12:11.380 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.380 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.380 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.638 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.638 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3587373 00:12:11.638 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.638 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.638 18:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.896 18:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.896 18:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3587373 00:12:11.896 18:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.896 18:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.896 18:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.462 18:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.462 18:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3587373 00:12:12.462 18:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.462 18:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.462 18:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.720 18:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.720 18:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3587373 00:12:12.720 18:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.720 18:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.720 18:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.978 18:49:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.978 18:49:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3587373 00:12:12.978 18:49:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.978 18:49:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.978 18:49:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.236 18:49:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.236 18:49:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3587373 00:12:13.236 18:49:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.236 18:49:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.236 18:49:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.494 18:49:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.494 18:49:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3587373 00:12:13.494 18:49:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.494 18:49:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.494 18:49:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.060 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.060 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3587373 00:12:14.060 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.060 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.060 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.318 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.318 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3587373 00:12:14.318 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.318 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.318 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.576 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.576 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3587373 00:12:14.576 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.576 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.576 
18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.834 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.834 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3587373 00:12:14.834 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.834 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.834 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.398 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.398 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3587373 00:12:15.398 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.398 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.398 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.656 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.656 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3587373 00:12:15.656 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.656 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.656 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.914 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.914 
18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3587373 00:12:15.914 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.914 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.914 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.172 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.172 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3587373 00:12:16.172 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:16.172 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.172 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.430 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.430 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3587373 00:12:16.430 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:16.430 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.430 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.997 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.997 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3587373 00:12:16.997 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:12:16.997 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.997 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.258 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.258 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3587373 00:12:17.258 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:17.258 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.258 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.515 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.515 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3587373 00:12:17.515 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:17.515 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.515 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.772 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.772 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3587373 00:12:17.772 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:17.772 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.772 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:12:18.338 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.339 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3587373 00:12:18.339 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:18.339 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.339 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:18.597 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.597 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3587373 00:12:18.597 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:18.597 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.597 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:18.856 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.856 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3587373 00:12:18.856 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:18.856 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.856 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:19.113 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.113 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 3587373 00:12:19.114 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:19.114 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.114 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:19.372 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.372 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3587373 00:12:19.372 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:19.372 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.372 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:19.939 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.939 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3587373 00:12:19.939 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:19.939 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.939 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:20.198 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.198 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3587373 00:12:20.198 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:20.198 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:20.198 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:20.538 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.538 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3587373 00:12:20.538 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:20.538 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.538 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:20.864 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.864 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3587373 00:12:20.864 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:20.864 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.864 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.148 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:21.148 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.148 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3587373 00:12:21.148 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3587373) - No such process 00:12:21.148 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3587373 00:12:21.148 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:21.148 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:21.148 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:21.148 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:21.148 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:12:21.148 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:21.148 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:12:21.148 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:21.148 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:21.148 rmmod nvme_tcp 00:12:21.148 rmmod nvme_fabrics 00:12:21.148 rmmod nvme_keyring 00:12:21.148 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:21.148 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:12:21.148 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:12:21.148 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3587352 ']' 00:12:21.148 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3587352 00:12:21.148 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 3587352 ']' 00:12:21.148 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 3587352 00:12:21.148 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@959 -- # uname 00:12:21.148 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:21.148 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3587352 00:12:21.148 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:21.148 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:21.148 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3587352' 00:12:21.148 killing process with pid 3587352 00:12:21.148 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 3587352 00:12:21.148 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 3587352 00:12:21.408 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:21.408 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:21.408 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:21.408 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:12:21.408 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:12:21.408 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:12:21.408 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:21.408 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:21.408 18:49:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:21.408 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.408 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:21.408 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.943 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:23.943 00:12:23.943 real 0m19.065s 00:12:23.943 user 0m39.324s 00:12:23.943 sys 0m8.584s 00:12:23.943 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:23.943 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:23.943 ************************************ 00:12:23.943 END TEST nvmf_connect_stress 00:12:23.943 ************************************ 00:12:23.943 18:49:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:23.943 18:49:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:23.943 18:49:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.943 18:49:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:23.943 ************************************ 00:12:23.943 START TEST nvmf_fused_ordering 00:12:23.943 ************************************ 00:12:23.943 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:23.943 * Looking for test storage... 
00:12:23.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:23.943 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:23.943 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:12:23.943 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:23.943 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:23.943 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:23.943 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:23.943 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:12:23.944 18:49:45 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:23.944 18:49:45 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:23.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.944 --rc genhtml_branch_coverage=1 00:12:23.944 --rc genhtml_function_coverage=1 00:12:23.944 --rc genhtml_legend=1 00:12:23.944 --rc geninfo_all_blocks=1 00:12:23.944 --rc geninfo_unexecuted_blocks=1 00:12:23.944 00:12:23.944 ' 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:23.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.944 --rc genhtml_branch_coverage=1 00:12:23.944 --rc genhtml_function_coverage=1 00:12:23.944 --rc genhtml_legend=1 00:12:23.944 --rc geninfo_all_blocks=1 00:12:23.944 --rc geninfo_unexecuted_blocks=1 00:12:23.944 00:12:23.944 ' 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:23.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.944 --rc genhtml_branch_coverage=1 00:12:23.944 --rc genhtml_function_coverage=1 00:12:23.944 --rc genhtml_legend=1 00:12:23.944 --rc geninfo_all_blocks=1 00:12:23.944 --rc geninfo_unexecuted_blocks=1 00:12:23.944 00:12:23.944 ' 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:23.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.944 --rc genhtml_branch_coverage=1 00:12:23.944 --rc genhtml_function_coverage=1 00:12:23.944 --rc genhtml_legend=1 00:12:23.944 --rc geninfo_all_blocks=1 00:12:23.944 --rc geninfo_unexecuted_blocks=1 00:12:23.944 00:12:23.944 ' 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:23.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.944 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:23.945 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.945 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:23.945 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:23.945 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:12:23.945 18:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:30.514 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:30.514 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:12:30.514 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:30.514 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:30.514 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:30.514 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:30.514 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:30.514 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:12:30.514 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:30.515 18:49:51 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:30.515 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:30.515 18:49:51 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:30.515 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.515 18:49:51 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:30.515 Found net devices under 0000:86:00.0: cvl_0_0 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:30.515 Found net devices under 0000:86:00.1: cvl_0_1 
00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:30.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:30.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:12:30.515 00:12:30.515 --- 10.0.0.2 ping statistics --- 00:12:30.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.515 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:30.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:30.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:12:30.515 00:12:30.515 --- 10.0.0.1 ping statistics --- 00:12:30.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.515 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:30.515 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:30.516 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:30.516 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:30.516 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:30.516 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:30.516 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:30.516 18:49:51 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:30.516 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:30.516 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:30.516 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3592536 00:12:30.516 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3592536 00:12:30.516 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:30.516 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 3592536 ']' 00:12:30.516 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.516 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:30.516 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.516 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:30.516 18:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:30.516 [2024-11-20 18:49:52.025041] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:12:30.516 [2024-11-20 18:49:52.025084] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:30.516 [2024-11-20 18:49:52.103254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.516 [2024-11-20 18:49:52.143750] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:30.516 [2024-11-20 18:49:52.143787] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:30.516 [2024-11-20 18:49:52.143794] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:30.516 [2024-11-20 18:49:52.143799] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:30.516 [2024-11-20 18:49:52.143805] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:30.516 [2024-11-20 18:49:52.144378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:30.516 18:49:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:30.516 18:49:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:12:30.516 18:49:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:30.516 18:49:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:30.516 18:49:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:30.516 18:49:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:30.516 18:49:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:30.516 18:49:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.516 18:49:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:30.516 [2024-11-20 18:49:52.280754] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:30.516 18:49:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.516 18:49:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:30.516 18:49:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.516 18:49:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:30.516 18:49:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.516 18:49:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:30.516 18:49:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.516 18:49:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:30.516 [2024-11-20 18:49:52.300942] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:30.516 18:49:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.516 18:49:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:30.516 18:49:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.516 18:49:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:30.516 NULL1 00:12:30.516 18:49:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.516 18:49:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:30.516 18:49:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.516 18:49:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:30.516 18:49:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.516 18:49:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:30.516 18:49:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.516 18:49:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:30.516 18:49:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.516 18:49:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:30.516 [2024-11-20 18:49:52.361225] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:12:30.516 [2024-11-20 18:49:52.361268] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3592721 ] 00:12:30.516 Attached to nqn.2016-06.io.spdk:cnode1 00:12:30.516 Namespace ID: 1 size: 1GB 00:12:30.516 fused_ordering(0) 00:12:30.516 fused_ordering(1) 00:12:30.516 fused_ordering(2) 00:12:30.516 fused_ordering(3) 00:12:30.516 fused_ordering(4) 00:12:30.516 fused_ordering(5) 00:12:30.516 fused_ordering(6) 00:12:30.516 fused_ordering(7) 00:12:30.516 fused_ordering(8) 00:12:30.516 fused_ordering(9) 00:12:30.516 fused_ordering(10) 00:12:30.516 fused_ordering(11) 00:12:30.516 fused_ordering(12) 00:12:30.516 fused_ordering(13) 00:12:30.516 fused_ordering(14) 00:12:30.516 fused_ordering(15) 00:12:30.516 fused_ordering(16) 00:12:30.516 fused_ordering(17) 00:12:30.516 fused_ordering(18) 00:12:30.516 fused_ordering(19) 00:12:30.516 fused_ordering(20) 00:12:30.516 fused_ordering(21) 00:12:30.516 fused_ordering(22) 00:12:30.516 fused_ordering(23) 00:12:30.516 fused_ordering(24) 00:12:30.516 fused_ordering(25) 00:12:30.516 fused_ordering(26) 00:12:30.516 fused_ordering(27) 00:12:30.516 
fused_ordering(28) 00:12:30.516 fused_ordering(29) 00:12:30.516 fused_ordering(30) 00:12:30.516 fused_ordering(31) 00:12:30.516 fused_ordering(32) 00:12:30.516 fused_ordering(33) 00:12:30.516 fused_ordering(34) 00:12:30.516 fused_ordering(35) 00:12:30.516 fused_ordering(36) 00:12:30.516 fused_ordering(37) 00:12:30.516 fused_ordering(38) 00:12:30.516 fused_ordering(39) 00:12:30.516 fused_ordering(40) 00:12:30.516 fused_ordering(41) 00:12:30.516 fused_ordering(42) 00:12:30.516 fused_ordering(43) 00:12:30.516 fused_ordering(44) 00:12:30.516 fused_ordering(45) 00:12:30.516 fused_ordering(46) 00:12:30.516 fused_ordering(47) 00:12:30.516 fused_ordering(48) 00:12:30.516 fused_ordering(49) 00:12:30.516 fused_ordering(50) 00:12:30.516 fused_ordering(51) 00:12:30.516 fused_ordering(52) 00:12:30.516 fused_ordering(53) 00:12:30.516 fused_ordering(54) 00:12:30.516 fused_ordering(55) 00:12:30.516 fused_ordering(56) 00:12:30.516 fused_ordering(57) 00:12:30.516 fused_ordering(58) 00:12:30.516 fused_ordering(59) 00:12:30.516 fused_ordering(60) 00:12:30.516 fused_ordering(61) 00:12:30.516 fused_ordering(62) 00:12:30.516 fused_ordering(63) 00:12:30.516 fused_ordering(64) 00:12:30.516 fused_ordering(65) 00:12:30.516 fused_ordering(66) 00:12:30.516 fused_ordering(67) 00:12:30.516 fused_ordering(68) 00:12:30.516 fused_ordering(69) 00:12:30.516 fused_ordering(70) 00:12:30.516 fused_ordering(71) 00:12:30.516 fused_ordering(72) 00:12:30.516 fused_ordering(73) 00:12:30.516 fused_ordering(74) 00:12:30.516 fused_ordering(75) 00:12:30.516 fused_ordering(76) 00:12:30.516 fused_ordering(77) 00:12:30.516 fused_ordering(78) 00:12:30.516 fused_ordering(79) 00:12:30.516 fused_ordering(80) 00:12:30.516 fused_ordering(81) 00:12:30.516 fused_ordering(82) 00:12:30.516 fused_ordering(83) 00:12:30.516 fused_ordering(84) 00:12:30.516 fused_ordering(85) 00:12:30.516 fused_ordering(86) 00:12:30.516 fused_ordering(87) 00:12:30.516 fused_ordering(88) 00:12:30.516 fused_ordering(89) 00:12:30.516 
fused_ordering(90) 00:12:30.516 fused_ordering(91) 00:12:30.516 fused_ordering(92) 00:12:30.516 fused_ordering(93) 00:12:30.516 fused_ordering(94) 00:12:30.516 fused_ordering(95) 00:12:30.516 fused_ordering(96) 00:12:30.516 fused_ordering(97) 00:12:30.516 fused_ordering(98) 00:12:30.516 fused_ordering(99) 00:12:30.516 fused_ordering(100) 00:12:30.516 fused_ordering(101) 00:12:30.517 fused_ordering(102) 00:12:30.517 fused_ordering(103) 00:12:30.517 fused_ordering(104) 00:12:30.517 fused_ordering(105) 00:12:30.517 fused_ordering(106) 00:12:30.517 fused_ordering(107) 00:12:30.517 fused_ordering(108) 00:12:30.517 fused_ordering(109) 00:12:30.517 fused_ordering(110) 00:12:30.517 fused_ordering(111) 00:12:30.517 fused_ordering(112) 00:12:30.517 fused_ordering(113) 00:12:30.517 fused_ordering(114) 00:12:30.517 fused_ordering(115) 00:12:30.517 fused_ordering(116) 00:12:30.517 fused_ordering(117) 00:12:30.517 fused_ordering(118) 00:12:30.517 fused_ordering(119) 00:12:30.517 fused_ordering(120) 00:12:30.517 fused_ordering(121) 00:12:30.517 fused_ordering(122) 00:12:30.517 fused_ordering(123) 00:12:30.517 fused_ordering(124) 00:12:30.517 fused_ordering(125) 00:12:30.517 fused_ordering(126) 00:12:30.517 fused_ordering(127) 00:12:30.517 fused_ordering(128) 00:12:30.517 fused_ordering(129) 00:12:30.517 fused_ordering(130) 00:12:30.517 fused_ordering(131) 00:12:30.517 fused_ordering(132) 00:12:30.517 fused_ordering(133) 00:12:30.517 fused_ordering(134) 00:12:30.517 fused_ordering(135) 00:12:30.517 fused_ordering(136) 00:12:30.517 fused_ordering(137) 00:12:30.517 fused_ordering(138) 00:12:30.517 fused_ordering(139) 00:12:30.517 fused_ordering(140) 00:12:30.517 fused_ordering(141) 00:12:30.517 fused_ordering(142) 00:12:30.517 fused_ordering(143) 00:12:30.517 fused_ordering(144) 00:12:30.517 fused_ordering(145) 00:12:30.517 fused_ordering(146) 00:12:30.517 fused_ordering(147) 00:12:30.517 fused_ordering(148) 00:12:30.517 fused_ordering(149) 00:12:30.517 fused_ordering(150) 
00:12:30.517 fused_ordering(151) 00:12:30.517 fused_ordering(152) 00:12:30.517 fused_ordering(153) 00:12:30.517 fused_ordering(154) 00:12:30.517 fused_ordering(155) 00:12:30.517 fused_ordering(156) 00:12:30.517 fused_ordering(157) 00:12:30.517 fused_ordering(158) 00:12:30.517 fused_ordering(159) 00:12:30.517 fused_ordering(160) 00:12:30.517 fused_ordering(161) 00:12:30.517 fused_ordering(162) 00:12:30.517 fused_ordering(163) 00:12:30.517 fused_ordering(164) 00:12:30.517 fused_ordering(165) 00:12:30.517 fused_ordering(166) 00:12:30.517 fused_ordering(167) 00:12:30.517 fused_ordering(168) 00:12:30.517 fused_ordering(169) 00:12:30.517 fused_ordering(170) 00:12:30.517 fused_ordering(171) 00:12:30.517 fused_ordering(172) 00:12:30.517 fused_ordering(173) 00:12:30.517 fused_ordering(174) 00:12:30.517 fused_ordering(175) 00:12:30.517 fused_ordering(176) 00:12:30.517 fused_ordering(177) 00:12:30.517 fused_ordering(178) 00:12:30.517 fused_ordering(179) 00:12:30.517 fused_ordering(180) 00:12:30.517 fused_ordering(181) 00:12:30.517 fused_ordering(182) 00:12:30.517 fused_ordering(183) 00:12:30.517 fused_ordering(184) 00:12:30.517 fused_ordering(185) 00:12:30.517 fused_ordering(186) 00:12:30.517 fused_ordering(187) 00:12:30.517 fused_ordering(188) 00:12:30.517 fused_ordering(189) 00:12:30.517 fused_ordering(190) 00:12:30.517 fused_ordering(191) 00:12:30.517 fused_ordering(192) 00:12:30.517 fused_ordering(193) 00:12:30.517 fused_ordering(194) 00:12:30.517 fused_ordering(195) 00:12:30.517 fused_ordering(196) 00:12:30.517 fused_ordering(197) 00:12:30.517 fused_ordering(198) 00:12:30.517 fused_ordering(199) 00:12:30.517 fused_ordering(200) 00:12:30.517 fused_ordering(201) 00:12:30.517 fused_ordering(202) 00:12:30.517 fused_ordering(203) 00:12:30.517 fused_ordering(204) 00:12:30.517 fused_ordering(205) 00:12:30.775 fused_ordering(206) 00:12:30.775 fused_ordering(207) 00:12:30.775 fused_ordering(208) 00:12:30.775 fused_ordering(209) 00:12:30.775 fused_ordering(210) 00:12:30.775 
fused_ordering(211) 00:12:30.775 fused_ordering(212) 00:12:30.775 fused_ordering(213) 00:12:30.776 fused_ordering(214) 00:12:30.776 fused_ordering(215) 00:12:30.776 fused_ordering(216) 00:12:30.776 fused_ordering(217) 00:12:30.776 fused_ordering(218) 00:12:30.776 fused_ordering(219) 00:12:30.776 fused_ordering(220) 00:12:30.776 fused_ordering(221) 00:12:30.776 fused_ordering(222) 00:12:30.776 fused_ordering(223) 00:12:30.776 fused_ordering(224) 00:12:30.776 fused_ordering(225) 00:12:30.776 fused_ordering(226) 00:12:30.776 fused_ordering(227) 00:12:30.776 fused_ordering(228) 00:12:30.776 fused_ordering(229) 00:12:30.776 fused_ordering(230) 00:12:30.776 fused_ordering(231) 00:12:30.776 fused_ordering(232) 00:12:30.776 fused_ordering(233) 00:12:30.776 fused_ordering(234) 00:12:30.776 fused_ordering(235) 00:12:30.776 fused_ordering(236) 00:12:30.776 fused_ordering(237) 00:12:30.776 fused_ordering(238) 00:12:30.776 fused_ordering(239) 00:12:30.776 fused_ordering(240) 00:12:30.776 fused_ordering(241) 00:12:30.776 fused_ordering(242) 00:12:30.776 fused_ordering(243) 00:12:30.776 fused_ordering(244) 00:12:30.776 fused_ordering(245) 00:12:30.776 fused_ordering(246) 00:12:30.776 fused_ordering(247) 00:12:30.776 fused_ordering(248) 00:12:30.776 fused_ordering(249) 00:12:30.776 fused_ordering(250) 00:12:30.776 fused_ordering(251) 00:12:30.776 fused_ordering(252) 00:12:30.776 fused_ordering(253) 00:12:30.776 fused_ordering(254) 00:12:30.776 fused_ordering(255) 00:12:30.776 fused_ordering(256) 00:12:30.776 fused_ordering(257) 00:12:30.776 fused_ordering(258) 00:12:30.776 fused_ordering(259) 00:12:30.776 fused_ordering(260) 00:12:30.776 fused_ordering(261) 00:12:30.776 fused_ordering(262) 00:12:30.776 fused_ordering(263) 00:12:30.776 fused_ordering(264) 00:12:30.776 fused_ordering(265) 00:12:30.776 fused_ordering(266) 00:12:30.776 fused_ordering(267) 00:12:30.776 fused_ordering(268) 00:12:30.776 fused_ordering(269) 00:12:30.776 fused_ordering(270) 00:12:30.776 fused_ordering(271) 
00:12:30.776 fused_ordering(272) 00:12:30.776 fused_ordering(273) 00:12:30.776 fused_ordering(274) 00:12:30.776 fused_ordering(275) 00:12:30.776 fused_ordering(276) 00:12:30.776 fused_ordering(277) 00:12:30.776 fused_ordering(278) 00:12:30.776 fused_ordering(279) 00:12:30.776 fused_ordering(280) 00:12:30.776 fused_ordering(281) 00:12:30.776 fused_ordering(282) 00:12:30.776 fused_ordering(283) 00:12:30.776 fused_ordering(284) 00:12:30.776 fused_ordering(285) 00:12:30.776 fused_ordering(286) 00:12:30.776 fused_ordering(287) 00:12:30.776 fused_ordering(288) 00:12:30.776 fused_ordering(289) 00:12:30.776 fused_ordering(290) 00:12:30.776 fused_ordering(291) 00:12:30.776 fused_ordering(292) 00:12:30.776 fused_ordering(293) 00:12:30.776 fused_ordering(294) 00:12:30.776 fused_ordering(295) 00:12:30.776 fused_ordering(296) 00:12:30.776 fused_ordering(297) 00:12:30.776 fused_ordering(298) 00:12:30.776 fused_ordering(299) 00:12:30.776 fused_ordering(300) 00:12:30.776 fused_ordering(301) 00:12:30.776 fused_ordering(302) 00:12:30.776 fused_ordering(303) 00:12:30.776 fused_ordering(304) 00:12:30.776 fused_ordering(305) 00:12:30.776 fused_ordering(306) 00:12:30.776 fused_ordering(307) 00:12:30.776 fused_ordering(308) 00:12:30.776 fused_ordering(309) 00:12:30.776 fused_ordering(310) 00:12:30.776 fused_ordering(311) 00:12:30.776 fused_ordering(312) 00:12:30.776 fused_ordering(313) 00:12:30.776 fused_ordering(314) 00:12:30.776 fused_ordering(315) 00:12:30.776 fused_ordering(316) 00:12:30.776 fused_ordering(317) 00:12:30.776 fused_ordering(318) 00:12:30.776 fused_ordering(319) 00:12:30.776 fused_ordering(320) 00:12:30.776 fused_ordering(321) 00:12:30.776 fused_ordering(322) 00:12:30.776 fused_ordering(323) 00:12:30.776 fused_ordering(324) 00:12:30.776 fused_ordering(325) 00:12:30.776 fused_ordering(326) 00:12:30.776 fused_ordering(327) 00:12:30.776 fused_ordering(328) 00:12:30.776 fused_ordering(329) 00:12:30.776 fused_ordering(330) 00:12:30.776 fused_ordering(331) 00:12:30.776 
fused_ordering(332) 00:12:30.776 fused_ordering(333) 00:12:30.776 fused_ordering(334) 00:12:30.776 fused_ordering(335) 00:12:30.776 fused_ordering(336) 00:12:30.776 fused_ordering(337) 00:12:30.776 fused_ordering(338) 00:12:30.776 fused_ordering(339) 00:12:30.776 fused_ordering(340) 00:12:30.776 fused_ordering(341) 00:12:30.776 fused_ordering(342) 00:12:30.776 fused_ordering(343) 00:12:30.776 fused_ordering(344) 00:12:30.776 fused_ordering(345) 00:12:30.776 fused_ordering(346) 00:12:30.776 fused_ordering(347) 00:12:30.776 fused_ordering(348) 00:12:30.776 fused_ordering(349) 00:12:30.776 fused_ordering(350) 00:12:30.776 fused_ordering(351) 00:12:30.776 fused_ordering(352) 00:12:30.776 fused_ordering(353) 00:12:30.776 fused_ordering(354) 00:12:30.776 fused_ordering(355) 00:12:30.776 fused_ordering(356) 00:12:30.776 fused_ordering(357) 00:12:30.776 fused_ordering(358) 00:12:30.776 fused_ordering(359) 00:12:30.776 fused_ordering(360) 00:12:30.776 fused_ordering(361) 00:12:30.776 fused_ordering(362) 00:12:30.776 fused_ordering(363) 00:12:30.776 fused_ordering(364) 00:12:30.776 fused_ordering(365) 00:12:30.776 fused_ordering(366) 00:12:30.776 fused_ordering(367) 00:12:30.776 fused_ordering(368) 00:12:30.776 fused_ordering(369) 00:12:30.776 fused_ordering(370) 00:12:30.776 fused_ordering(371) 00:12:30.776 fused_ordering(372) 00:12:30.776 fused_ordering(373) 00:12:30.776 fused_ordering(374) 00:12:30.776 fused_ordering(375) 00:12:30.776 fused_ordering(376) 00:12:30.776 fused_ordering(377) 00:12:30.776 fused_ordering(378) 00:12:30.776 fused_ordering(379) 00:12:30.776 fused_ordering(380) 00:12:30.776 fused_ordering(381) 00:12:30.776 fused_ordering(382) 00:12:30.776 fused_ordering(383) 00:12:30.776 fused_ordering(384) 00:12:30.776 fused_ordering(385) 00:12:30.776 fused_ordering(386) 00:12:30.776 fused_ordering(387) 00:12:30.776 fused_ordering(388) 00:12:30.776 fused_ordering(389) 00:12:30.776 fused_ordering(390) 00:12:30.776 fused_ordering(391) 00:12:30.776 fused_ordering(392) 
00:12:30.776 fused_ordering(393) 00:12:30.776 fused_ordering(394) 00:12:30.776 fused_ordering(395) 00:12:30.776 fused_ordering(396) 00:12:30.776 fused_ordering(397) 00:12:30.776 fused_ordering(398) 00:12:30.776 fused_ordering(399) 00:12:30.776 fused_ordering(400) 00:12:30.776 fused_ordering(401) 00:12:30.776 fused_ordering(402) 00:12:30.776 fused_ordering(403) 00:12:30.776 fused_ordering(404) 00:12:30.776 fused_ordering(405) 00:12:30.776 fused_ordering(406) 00:12:30.776 fused_ordering(407) 00:12:30.776 fused_ordering(408) 00:12:30.776 fused_ordering(409) 00:12:30.776 fused_ordering(410) 00:12:31.034 fused_ordering(411) 00:12:31.034 fused_ordering(412) 00:12:31.034 fused_ordering(413) 00:12:31.034 fused_ordering(414) 00:12:31.034 fused_ordering(415) 00:12:31.034 fused_ordering(416) 00:12:31.034 fused_ordering(417) 00:12:31.034 fused_ordering(418) 00:12:31.034 fused_ordering(419) 00:12:31.034 fused_ordering(420) 00:12:31.034 fused_ordering(421) 00:12:31.034 fused_ordering(422) 00:12:31.034 fused_ordering(423) 00:12:31.034 fused_ordering(424) 00:12:31.034 fused_ordering(425) 00:12:31.034 fused_ordering(426) 00:12:31.034 fused_ordering(427) 00:12:31.034 fused_ordering(428) 00:12:31.034 fused_ordering(429) 00:12:31.034 fused_ordering(430) 00:12:31.034 fused_ordering(431) 00:12:31.034 fused_ordering(432) 00:12:31.034 fused_ordering(433) 00:12:31.034 fused_ordering(434) 00:12:31.034 fused_ordering(435) 00:12:31.034 fused_ordering(436) 00:12:31.034 fused_ordering(437) 00:12:31.034 fused_ordering(438) 00:12:31.034 fused_ordering(439) 00:12:31.034 fused_ordering(440) 00:12:31.034 fused_ordering(441) 00:12:31.034 fused_ordering(442) 00:12:31.034 fused_ordering(443) 00:12:31.034 fused_ordering(444) 00:12:31.034 fused_ordering(445) 00:12:31.034 fused_ordering(446) 00:12:31.034 fused_ordering(447) 00:12:31.034 fused_ordering(448) 00:12:31.034 fused_ordering(449) 00:12:31.034 fused_ordering(450) 00:12:31.034 fused_ordering(451) 00:12:31.034 fused_ordering(452) 00:12:31.034 
fused_ordering(453) 00:12:31.034 fused_ordering(454) 00:12:31.034 fused_ordering(455) 00:12:31.034 fused_ordering(456) 00:12:31.034 fused_ordering(457) 00:12:31.034 fused_ordering(458) 00:12:31.034 fused_ordering(459) 00:12:31.034 fused_ordering(460) 00:12:31.034 fused_ordering(461) 00:12:31.034 fused_ordering(462) 00:12:31.034 fused_ordering(463) 00:12:31.034 fused_ordering(464) 00:12:31.034 fused_ordering(465) 00:12:31.034 fused_ordering(466) 00:12:31.034 fused_ordering(467) 00:12:31.034 fused_ordering(468) 00:12:31.034 fused_ordering(469) 00:12:31.034 fused_ordering(470) 00:12:31.034 fused_ordering(471) 00:12:31.034 fused_ordering(472) 00:12:31.034 fused_ordering(473) 00:12:31.034 fused_ordering(474) 00:12:31.034 fused_ordering(475) 00:12:31.034 fused_ordering(476) 00:12:31.034 fused_ordering(477) 00:12:31.034 fused_ordering(478) 00:12:31.034 fused_ordering(479) 00:12:31.034 fused_ordering(480) 00:12:31.034 fused_ordering(481) 00:12:31.034 fused_ordering(482) 00:12:31.034 fused_ordering(483) 00:12:31.034 fused_ordering(484) 00:12:31.034 fused_ordering(485) 00:12:31.034 fused_ordering(486) 00:12:31.034 fused_ordering(487) 00:12:31.034 fused_ordering(488) 00:12:31.034 fused_ordering(489) 00:12:31.034 fused_ordering(490) 00:12:31.034 fused_ordering(491) 00:12:31.034 fused_ordering(492) 00:12:31.034 fused_ordering(493) 00:12:31.034 fused_ordering(494) 00:12:31.034 fused_ordering(495) 00:12:31.034 fused_ordering(496) 00:12:31.034 fused_ordering(497) 00:12:31.034 fused_ordering(498) 00:12:31.034 fused_ordering(499) 00:12:31.034 fused_ordering(500) 00:12:31.034 fused_ordering(501) 00:12:31.034 fused_ordering(502) 00:12:31.034 fused_ordering(503) 00:12:31.034 fused_ordering(504) 00:12:31.034 fused_ordering(505) 00:12:31.034 fused_ordering(506) 00:12:31.034 fused_ordering(507) 00:12:31.034 fused_ordering(508) 00:12:31.034 fused_ordering(509) 00:12:31.034 fused_ordering(510) 00:12:31.034 fused_ordering(511) 00:12:31.034 fused_ordering(512) 00:12:31.035 fused_ordering(513) 
00:12:31.035 fused_ordering(514) 00:12:31.035 fused_ordering(515) 00:12:31.035 fused_ordering(516) 00:12:31.035 fused_ordering(517) 00:12:31.035 fused_ordering(518) 00:12:31.035 fused_ordering(519) 00:12:31.035 fused_ordering(520) 00:12:31.035 fused_ordering(521) 00:12:31.035 fused_ordering(522) 00:12:31.035 fused_ordering(523) 00:12:31.035 fused_ordering(524) 00:12:31.035 fused_ordering(525) 00:12:31.035 fused_ordering(526) 00:12:31.035 fused_ordering(527) 00:12:31.035 fused_ordering(528) 00:12:31.035 fused_ordering(529) 00:12:31.035 fused_ordering(530) 00:12:31.035 fused_ordering(531) 00:12:31.035 fused_ordering(532) 00:12:31.035 fused_ordering(533) 00:12:31.035 fused_ordering(534) 00:12:31.035 fused_ordering(535) 00:12:31.035 fused_ordering(536) 00:12:31.035 fused_ordering(537) 00:12:31.035 fused_ordering(538) 00:12:31.035 fused_ordering(539) 00:12:31.035 fused_ordering(540) 00:12:31.035 fused_ordering(541) 00:12:31.035 fused_ordering(542) 00:12:31.035 fused_ordering(543) 00:12:31.035 fused_ordering(544) 00:12:31.035 fused_ordering(545) 00:12:31.035 fused_ordering(546) 00:12:31.035 fused_ordering(547) 00:12:31.035 fused_ordering(548) 00:12:31.035 fused_ordering(549) 00:12:31.035 fused_ordering(550) 00:12:31.035 fused_ordering(551) 00:12:31.035 fused_ordering(552) 00:12:31.035 fused_ordering(553) 00:12:31.035 fused_ordering(554) 00:12:31.035 fused_ordering(555) 00:12:31.035 fused_ordering(556) 00:12:31.035 fused_ordering(557) 00:12:31.035 fused_ordering(558) 00:12:31.035 fused_ordering(559) 00:12:31.035 fused_ordering(560) 00:12:31.035 fused_ordering(561) 00:12:31.035 fused_ordering(562) 00:12:31.035 fused_ordering(563) 00:12:31.035 fused_ordering(564) 00:12:31.035 fused_ordering(565) 00:12:31.035 fused_ordering(566) 00:12:31.035 fused_ordering(567) 00:12:31.035 fused_ordering(568) 00:12:31.035 fused_ordering(569) 00:12:31.035 fused_ordering(570) 00:12:31.035 fused_ordering(571) 00:12:31.035 fused_ordering(572) 00:12:31.035 fused_ordering(573) 00:12:31.035 
fused_ordering(574) 00:12:31.035 [fused_ordering entries 575-814 omitted for readability: 575-614 at 00:12:31.035, 615-772 at 00:12:31.293, 773-814 at 00:12:31.294] fused_ordering(815) 00:12:31.294
fused_ordering(816) 00:12:31.294 fused_ordering(817) 00:12:31.294 fused_ordering(818) 00:12:31.294 fused_ordering(819) 00:12:31.294 fused_ordering(820) 00:12:31.860 [2024-11-20 18:49:54.066743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1857f00 is same with the state(6) to be set 00:12:31.860 fused_ordering(821) 00:12:31.860 [fused_ordering entries 822-870 omitted for readability, all at 00:12:31.860] fused_ordering(871)
00:12:31.860 fused_ordering(872) 00:12:31.860 [fused_ordering entries 873-991 omitted for readability, all at 00:12:31.860] fused_ordering(992)
00:12:31.860 fused_ordering(993) 00:12:31.860 fused_ordering(994) 00:12:31.860 fused_ordering(995) 00:12:31.860 fused_ordering(996) 00:12:31.860 fused_ordering(997) 00:12:31.860 fused_ordering(998) 00:12:31.860 fused_ordering(999) 00:12:31.860 fused_ordering(1000) 00:12:31.860 fused_ordering(1001) 00:12:31.860 fused_ordering(1002) 00:12:31.860 fused_ordering(1003) 00:12:31.860 fused_ordering(1004) 00:12:31.860 fused_ordering(1005) 00:12:31.860 fused_ordering(1006) 00:12:31.861 fused_ordering(1007) 00:12:31.861 fused_ordering(1008) 00:12:31.861 fused_ordering(1009) 00:12:31.861 fused_ordering(1010) 00:12:31.861 fused_ordering(1011) 00:12:31.861 fused_ordering(1012) 00:12:31.861 fused_ordering(1013) 00:12:31.861 fused_ordering(1014) 00:12:31.861 fused_ordering(1015) 00:12:31.861 fused_ordering(1016) 00:12:31.861 fused_ordering(1017) 00:12:31.861 fused_ordering(1018) 00:12:31.861 fused_ordering(1019) 00:12:31.861 fused_ordering(1020) 00:12:31.861 fused_ordering(1021) 00:12:31.861 fused_ordering(1022) 00:12:31.861 fused_ordering(1023) 00:12:31.861 18:49:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:31.861 18:49:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:31.861 18:49:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:31.861 18:49:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:12:31.861 18:49:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:31.861 18:49:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:12:31.861 18:49:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:31.861 18:49:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:31.861 rmmod nvme_tcp 00:12:31.861 
rmmod nvme_fabrics 00:12:31.861 rmmod nvme_keyring 00:12:31.861 18:49:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:31.861 18:49:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:12:31.861 18:49:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:12:31.861 18:49:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3592536 ']' 00:12:31.861 18:49:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3592536 00:12:31.861 18:49:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 3592536 ']' 00:12:31.861 18:49:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 3592536 00:12:31.861 18:49:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:12:31.861 18:49:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:31.861 18:49:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3592536 00:12:32.119 18:49:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:32.119 18:49:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:32.120 18:49:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3592536' 00:12:32.120 killing process with pid 3592536 00:12:32.120 18:49:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 3592536 00:12:32.120 18:49:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 3592536 00:12:32.120 18:49:54 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:32.120 18:49:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:32.120 18:49:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:32.120 18:49:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:12:32.120 18:49:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:12:32.120 18:49:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:32.120 18:49:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:12:32.120 18:49:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:32.120 18:49:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:32.120 18:49:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.120 18:49:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.120 18:49:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:34.654 00:12:34.654 real 0m10.712s 00:12:34.654 user 0m4.836s 00:12:34.654 sys 0m5.943s 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:34.654 ************************************ 00:12:34.654 END TEST nvmf_fused_ordering 00:12:34.654 
************************************ 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:34.654 ************************************ 00:12:34.654 START TEST nvmf_ns_masking 00:12:34.654 ************************************ 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:34.654 * Looking for test storage... 00:12:34.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:12:34.654 18:49:56 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:34.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.654 --rc genhtml_branch_coverage=1 00:12:34.654 --rc genhtml_function_coverage=1 00:12:34.654 --rc genhtml_legend=1 00:12:34.654 --rc geninfo_all_blocks=1 00:12:34.654 --rc 
geninfo_unexecuted_blocks=1 00:12:34.654 00:12:34.654 ' 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:34.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.654 --rc genhtml_branch_coverage=1 00:12:34.654 --rc genhtml_function_coverage=1 00:12:34.654 --rc genhtml_legend=1 00:12:34.654 --rc geninfo_all_blocks=1 00:12:34.654 --rc geninfo_unexecuted_blocks=1 00:12:34.654 00:12:34.654 ' 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:34.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.654 --rc genhtml_branch_coverage=1 00:12:34.654 --rc genhtml_function_coverage=1 00:12:34.654 --rc genhtml_legend=1 00:12:34.654 --rc geninfo_all_blocks=1 00:12:34.654 --rc geninfo_unexecuted_blocks=1 00:12:34.654 00:12:34.654 ' 00:12:34.654 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:34.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.654 --rc genhtml_branch_coverage=1 00:12:34.654 --rc genhtml_function_coverage=1 00:12:34.654 --rc genhtml_legend=1 00:12:34.654 --rc geninfo_all_blocks=1 00:12:34.654 --rc geninfo_unexecuted_blocks=1 00:12:34.654 00:12:34.654 ' 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:34.655 18:49:56 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:34.655 18:49:56 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:34.655 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=a52ebb45-d0c7-4b37-909a-3a557e95de37 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=c559fd61-a142-4f84-a8ab-b51aa8be45b3 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:34.655 18:49:56 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=ae14dbcc-8b91-4ea3-a38b-71097975e281 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.655 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:34.656 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.656 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:34.656 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:34.656 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:12:34.656 18:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:12:41.225 18:50:02 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:41.225 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:41.225 18:50:02 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:41.225 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:41.225 Found net devices under 0000:86:00.0: cvl_0_0 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:41.225 Found net devices under 0000:86:00.1: 
cvl_0_1 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:41.225 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:41.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:41.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.412 ms 00:12:41.226 00:12:41.226 --- 10.0.0.2 ping statistics --- 00:12:41.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.226 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:41.226 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:41.226 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:12:41.226 00:12:41.226 --- 10.0.0.1 ping statistics --- 00:12:41.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.226 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3596544 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3596544 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3596544 ']' 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:41.226 [2024-11-20 18:50:02.772819] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:12:41.226 [2024-11-20 18:50:02.772860] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:41.226 [2024-11-20 18:50:02.851442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.226 [2024-11-20 18:50:02.891661] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:41.226 [2024-11-20 18:50:02.891700] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:41.226 [2024-11-20 18:50:02.891707] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:41.226 [2024-11-20 18:50:02.891716] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:41.226 [2024-11-20 18:50:02.891721] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:41.226 [2024-11-20 18:50:02.892298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:41.226 18:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:41.226 18:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:41.226 18:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:41.226 [2024-11-20 18:50:03.192826] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:41.226 18:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:41.226 18:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:41.226 18:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:41.226 Malloc1 00:12:41.226 18:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:41.484 Malloc2 00:12:41.484 18:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:41.742 18:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:42.000 18:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.000 [2024-11-20 18:50:04.231405] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.000 18:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:42.000 18:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ae14dbcc-8b91-4ea3-a38b-71097975e281 -a 10.0.0.2 -s 4420 -i 4 00:12:42.258 18:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:42.258 18:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:42.258 18:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:42.258 18:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:42.258 18:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:44.156 18:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:44.156 18:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:44.156 18:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # 
grep -c SPDKISFASTANDAWESOME 00:12:44.413 18:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:44.413 18:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:44.413 18:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:44.413 18:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:44.413 18:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:44.413 18:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:44.414 18:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:44.414 18:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:44.414 18:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:44.414 18:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:44.414 [ 0]:0x1 00:12:44.414 18:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:44.414 18:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:44.414 18:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f19849b6e3274654a4d37f28b879e927 00:12:44.414 18:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f19849b6e3274654a4d37f28b879e927 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:44.414 18:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:44.671 18:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:44.671 18:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:44.671 18:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:44.671 [ 0]:0x1 00:12:44.671 18:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:44.671 18:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:44.672 18:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f19849b6e3274654a4d37f28b879e927 00:12:44.672 18:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f19849b6e3274654a4d37f28b879e927 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:44.672 18:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:44.672 18:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:44.672 18:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:44.672 [ 1]:0x2 00:12:44.672 18:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:44.672 18:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:44.672 18:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e7dba1f31e8648dfa5b9f144dc3ba0e4 00:12:44.672 18:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e7dba1f31e8648dfa5b9f144dc3ba0e4 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:44.672 18:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:44.672 18:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:44.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.929 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.929 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:45.186 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:45.186 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ae14dbcc-8b91-4ea3-a38b-71097975e281 -a 10.0.0.2 -s 4420 -i 4 00:12:45.443 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:45.443 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:45.443 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:45.443 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:12:45.443 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:12:45.443 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:47.342 18:50:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:47.343 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:47.343 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:47.343 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:47.343 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:47.343 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:47.343 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:47.343 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:47.343 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:47.343 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:47.343 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:47.343 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:47.343 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:47.343 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:47.343 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:47.343 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # 
type -t ns_is_visible 00:12:47.343 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:47.343 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:47.343 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:47.343 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:47.343 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:47.343 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:47.343 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:47.343 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:47.601 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:47.601 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:47.601 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:47.601 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:47.601 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:47.601 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:47.601 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:47.601 [ 0]:0x2 00:12:47.601 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:47.601 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:47.601 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e7dba1f31e8648dfa5b9f144dc3ba0e4 00:12:47.601 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e7dba1f31e8648dfa5b9f144dc3ba0e4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:47.601 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:47.601 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:47.601 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:47.601 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:47.859 [ 0]:0x1 00:12:47.859 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:47.859 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:47.859 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f19849b6e3274654a4d37f28b879e927 00:12:47.859 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f19849b6e3274654a4d37f28b879e927 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:47.859 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:47.859 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:47.859 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme 
list-ns /dev/nvme0 00:12:47.859 [ 1]:0x2 00:12:47.859 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:47.859 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:47.859 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e7dba1f31e8648dfa5b9f144dc3ba0e4 00:12:47.859 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e7dba1f31e8648dfa5b9f144dc3ba0e4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:47.859 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:48.118 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:48.118 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:48.118 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:48.118 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:48.118 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:48.118 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:48.118 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:48.118 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:48.118 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:48.118 
18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:48.118 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:48.118 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:48.118 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:48.118 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:48.118 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:48.118 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:48.118 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:48.118 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:48.118 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:48.118 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:48.118 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:48.118 [ 0]:0x2 00:12:48.118 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:48.118 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:48.118 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e7dba1f31e8648dfa5b9f144dc3ba0e4 00:12:48.118 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
e7dba1f31e8648dfa5b9f144dc3ba0e4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:48.118 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:48.118 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:48.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.118 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:48.376 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:48.376 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ae14dbcc-8b91-4ea3-a38b-71097975e281 -a 10.0.0.2 -s 4420 -i 4 00:12:48.634 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:48.634 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:48.634 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:48.634 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:12:48.635 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:12:48.635 18:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:50.534 18:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:50.534 18:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:12:50.534 18:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:50.534 18:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:12:50.534 18:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:50.534 18:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:50.534 18:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:50.534 18:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:50.534 18:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:50.534 18:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:50.534 18:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:50.534 18:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:50.534 18:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:50.534 [ 0]:0x1 00:12:50.534 18:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:50.534 18:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:50.792 18:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f19849b6e3274654a4d37f28b879e927 00:12:50.792 18:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f19849b6e3274654a4d37f28b879e927 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 
]] 00:12:50.792 18:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:50.792 18:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:50.792 18:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:50.792 [ 1]:0x2 00:12:50.792 18:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:50.792 18:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:50.792 18:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e7dba1f31e8648dfa5b9f144dc3ba0e4 00:12:50.792 18:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e7dba1f31e8648dfa5b9f144dc3ba0e4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:50.792 18:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:51.051 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:51.051 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:51.051 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:51.051 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:51.051 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:51.051 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:51.051 18:50:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:51.051 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:51.051 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:51.051 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:51.051 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:51.051 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:51.051 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:51.051 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:51.051 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:51.051 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:51.051 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:51.051 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:51.051 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:51.051 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:51.051 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:51.051 [ 0]:0x2 00:12:51.051 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:12:51.051 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:51.051 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e7dba1f31e8648dfa5b9f144dc3ba0e4 00:12:51.051 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e7dba1f31e8648dfa5b9f144dc3ba0e4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:51.051 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:51.051 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:51.051 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:51.051 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:51.051 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:51.051 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:51.051 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:51.051 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:51.051 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:51.051 
18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:51.051 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:51.051 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:51.308 [2024-11-20 18:50:13.437323] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:51.308 request: 00:12:51.308 { 00:12:51.308 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:51.308 "nsid": 2, 00:12:51.308 "host": "nqn.2016-06.io.spdk:host1", 00:12:51.308 "method": "nvmf_ns_remove_host", 00:12:51.308 "req_id": 1 00:12:51.308 } 00:12:51.308 Got JSON-RPC error response 00:12:51.308 response: 00:12:51.308 { 00:12:51.308 "code": -32602, 00:12:51.308 "message": "Invalid parameters" 00:12:51.308 } 00:12:51.308 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:51.308 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:51.308 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:51.308 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:51.308 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:51.308 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:51.308 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:51.308 
18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:51.308 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:51.308 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:51.308 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:51.308 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:51.308 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:51.308 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:51.309 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:51.309 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:51.309 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:51.309 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:51.309 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:51.309 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:51.309 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:51.309 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:51.309 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # 
ns_is_visible 0x2 00:12:51.309 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:51.309 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:51.309 [ 0]:0x2 00:12:51.309 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:51.309 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:51.309 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e7dba1f31e8648dfa5b9f144dc3ba0e4 00:12:51.309 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e7dba1f31e8648dfa5b9f144dc3ba0e4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:51.309 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:51.309 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:51.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.309 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3598537 00:12:51.309 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:51.309 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.309 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3598537 /var/tmp/host.sock 00:12:51.309 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3598537 ']' 00:12:51.309 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:12:51.309 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:51.309 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:51.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:51.309 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:51.309 18:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:51.566 [2024-11-20 18:50:13.665795] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:12:51.566 [2024-11-20 18:50:13.665840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3598537 ] 00:12:51.566 [2024-11-20 18:50:13.740009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.566 [2024-11-20 18:50:13.782015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.498 18:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:52.498 18:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:52.498 18:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.498 18:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:12:52.755 18:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid a52ebb45-d0c7-4b37-909a-3a557e95de37 00:12:52.755 18:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:52.755 18:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g A52EBB45D0C74B37909A3A557E95DE37 -i 00:12:53.013 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid c559fd61-a142-4f84-a8ab-b51aa8be45b3 00:12:53.013 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:53.013 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g C559FD61A1424F84A8ABB51AA8BE45B3 -i 00:12:53.013 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:53.270 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:53.527 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:53.528 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 
4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:53.785 nvme0n1 00:12:53.785 18:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:53.785 18:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:54.042 nvme1n2 00:12:54.042 18:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:54.042 18:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:54.042 18:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:54.042 18:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:54.042 18:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:54.301 18:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:54.301 18:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:54.301 18:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:54.301 18:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:54.558 18:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@135 -- # [[ a52ebb45-d0c7-4b37-909a-3a557e95de37 == \a\5\2\e\b\b\4\5\-\d\0\c\7\-\4\b\3\7\-\9\0\9\a\-\3\a\5\5\7\e\9\5\d\e\3\7 ]] 00:12:54.558 18:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:54.558 18:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:54.558 18:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:54.815 18:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ c559fd61-a142-4f84-a8ab-b51aa8be45b3 == \c\5\5\9\f\d\6\1\-\a\1\4\2\-\4\f\8\4\-\a\8\a\b\-\b\5\1\a\a\8\b\e\4\5\b\3 ]] 00:12:54.815 18:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.815 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:55.072 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid a52ebb45-d0c7-4b37-909a-3a557e95de37 00:12:55.072 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:55.072 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g A52EBB45D0C74B37909A3A557E95DE37 00:12:55.072 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:55.072 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g A52EBB45D0C74B37909A3A557E95DE37 00:12:55.072 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:55.072 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:55.072 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:55.072 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:55.072 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:55.072 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:55.072 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:55.072 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:55.072 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g A52EBB45D0C74B37909A3A557E95DE37 00:12:55.330 [2024-11-20 18:50:17.504561] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:12:55.330 [2024-11-20 18:50:17.504592] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, 
error=-19 00:12:55.330 [2024-11-20 18:50:17.504600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.330 request: 00:12:55.330 { 00:12:55.330 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:55.330 "namespace": { 00:12:55.330 "bdev_name": "invalid", 00:12:55.330 "nsid": 1, 00:12:55.330 "nguid": "A52EBB45D0C74B37909A3A557E95DE37", 00:12:55.330 "no_auto_visible": false 00:12:55.330 }, 00:12:55.330 "method": "nvmf_subsystem_add_ns", 00:12:55.330 "req_id": 1 00:12:55.330 } 00:12:55.330 Got JSON-RPC error response 00:12:55.330 response: 00:12:55.330 { 00:12:55.330 "code": -32602, 00:12:55.330 "message": "Invalid parameters" 00:12:55.330 } 00:12:55.330 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:55.330 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:55.330 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:55.330 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:55.330 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid a52ebb45-d0c7-4b37-909a-3a557e95de37 00:12:55.330 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:55.330 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g A52EBB45D0C74B37909A3A557E95DE37 -i 00:12:55.588 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:12:57.487 18:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:12:57.487 18:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:12:57.487 18:50:19 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:57.745 18:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:12:57.745 18:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3598537 00:12:57.745 18:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3598537 ']' 00:12:57.745 18:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3598537 00:12:57.745 18:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:57.745 18:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:57.745 18:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3598537 00:12:57.745 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:57.745 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:57.745 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3598537' 00:12:57.745 killing process with pid 3598537 00:12:57.745 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3598537 00:12:57.745 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3598537 00:12:58.003 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.261 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- 
# trap - SIGINT SIGTERM EXIT 00:12:58.261 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:12:58.261 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:58.261 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:12:58.261 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:58.261 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:12:58.261 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:58.261 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:58.261 rmmod nvme_tcp 00:12:58.261 rmmod nvme_fabrics 00:12:58.261 rmmod nvme_keyring 00:12:58.261 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:58.261 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:12:58.261 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:12:58.261 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3596544 ']' 00:12:58.261 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3596544 00:12:58.261 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3596544 ']' 00:12:58.261 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3596544 00:12:58.261 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:58.520 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:58.520 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 3596544 00:12:58.520 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:58.520 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:58.520 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3596544' 00:12:58.520 killing process with pid 3596544 00:12:58.520 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3596544 00:12:58.520 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3596544 00:12:58.520 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:58.520 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:58.520 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:58.520 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:12:58.520 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:12:58.520 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:58.520 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:12:58.779 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:58.779 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:58.779 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.779 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:58.779 
18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.683 18:50:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:00.683 00:13:00.683 real 0m26.401s 00:13:00.683 user 0m32.107s 00:13:00.683 sys 0m7.165s 00:13:00.683 18:50:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:00.683 18:50:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:00.683 ************************************ 00:13:00.683 END TEST nvmf_ns_masking 00:13:00.683 ************************************ 00:13:00.683 18:50:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:00.683 18:50:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:00.683 18:50:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:00.683 18:50:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:00.683 18:50:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:00.683 ************************************ 00:13:00.683 START TEST nvmf_nvme_cli 00:13:00.683 ************************************ 00:13:00.683 18:50:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:00.943 * Looking for test storage... 
00:13:00.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:00.943 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:00.943 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:13:00.943 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:00.943 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:00.943 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:00.943 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:00.943 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:00.943 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:00.943 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:00.943 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:00.943 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:00.943 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:00.943 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:00.943 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:00.943 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:00.943 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:00.943 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:00.943 18:50:23 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:00.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.944 --rc 
genhtml_branch_coverage=1 00:13:00.944 --rc genhtml_function_coverage=1 00:13:00.944 --rc genhtml_legend=1 00:13:00.944 --rc geninfo_all_blocks=1 00:13:00.944 --rc geninfo_unexecuted_blocks=1 00:13:00.944 00:13:00.944 ' 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:00.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.944 --rc genhtml_branch_coverage=1 00:13:00.944 --rc genhtml_function_coverage=1 00:13:00.944 --rc genhtml_legend=1 00:13:00.944 --rc geninfo_all_blocks=1 00:13:00.944 --rc geninfo_unexecuted_blocks=1 00:13:00.944 00:13:00.944 ' 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:00.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.944 --rc genhtml_branch_coverage=1 00:13:00.944 --rc genhtml_function_coverage=1 00:13:00.944 --rc genhtml_legend=1 00:13:00.944 --rc geninfo_all_blocks=1 00:13:00.944 --rc geninfo_unexecuted_blocks=1 00:13:00.944 00:13:00.944 ' 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:00.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.944 --rc genhtml_branch_coverage=1 00:13:00.944 --rc genhtml_function_coverage=1 00:13:00.944 --rc genhtml_legend=1 00:13:00.944 --rc geninfo_all_blocks=1 00:13:00.944 --rc geninfo_unexecuted_blocks=1 00:13:00.944 00:13:00.944 ' 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.944 18:50:23 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:00.944 18:50:23 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.944 18:50:23 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:00.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:00.944 18:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:07.513 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:07.514 18:50:28 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:07.514 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:07.514 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.514 18:50:28 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:07.514 Found net devices under 0000:86:00.0: cvl_0_0 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:07.514 Found net devices under 0000:86:00.1: cvl_0_1 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:07.514 18:50:28 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:07.514 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:07.514 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:07.514 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:07.514 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:07.514 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:07.514 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:07.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:07.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.453 ms 00:13:07.514 00:13:07.514 --- 10.0.0.2 ping statistics --- 00:13:07.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.514 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:13:07.514 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:07.514 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:07.514 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:13:07.514 00:13:07.514 --- 10.0.0.1 ping statistics --- 00:13:07.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.514 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:13:07.514 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:07.514 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:07.515 18:50:29 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3603256 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3603256 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 3603256 ']' 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:07.515 [2024-11-20 18:50:29.210917] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:13:07.515 [2024-11-20 18:50:29.210967] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:07.515 [2024-11-20 18:50:29.290252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:07.515 [2024-11-20 18:50:29.333373] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:07.515 [2024-11-20 18:50:29.333410] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:07.515 [2024-11-20 18:50:29.333417] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:07.515 [2024-11-20 18:50:29.333424] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:07.515 [2024-11-20 18:50:29.333429] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:07.515 [2024-11-20 18:50:29.334963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:07.515 [2024-11-20 18:50:29.335069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:07.515 [2024-11-20 18:50:29.335177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.515 [2024-11-20 18:50:29.335178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:07.515 [2024-11-20 18:50:29.468149] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:07.515 Malloc0 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:07.515 Malloc1 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:07.515 [2024-11-20 18:50:29.564171] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:07.515 00:13:07.515 Discovery Log Number of Records 2, Generation counter 2 00:13:07.515 =====Discovery Log Entry 0====== 00:13:07.515 trtype: tcp 00:13:07.515 adrfam: ipv4 00:13:07.515 subtype: current discovery subsystem 00:13:07.515 treq: not required 00:13:07.515 portid: 0 00:13:07.515 trsvcid: 4420 
00:13:07.515 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:07.515 traddr: 10.0.0.2 00:13:07.515 eflags: explicit discovery connections, duplicate discovery information 00:13:07.515 sectype: none 00:13:07.515 =====Discovery Log Entry 1====== 00:13:07.515 trtype: tcp 00:13:07.515 adrfam: ipv4 00:13:07.515 subtype: nvme subsystem 00:13:07.515 treq: not required 00:13:07.515 portid: 0 00:13:07.515 trsvcid: 4420 00:13:07.515 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:07.515 traddr: 10.0.0.2 00:13:07.515 eflags: none 00:13:07.515 sectype: none 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:07.515 18:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:08.884 18:50:30 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:08.884 18:50:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:13:08.885 18:50:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:08.885 18:50:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:08.885 18:50:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:08.885 18:50:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:13:10.780 18:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:10.780 18:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:10.780 18:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:10.780 18:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:10.780 18:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:10.780 18:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:13:10.780 18:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:10.780 18:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:10.780 18:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:10.780 18:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:11.038 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:11.038 
18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:11.038 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:11.038 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:11.038 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:11.038 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:11.038 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:11.038 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:11.038 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:11.038 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:11.038 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:11.038 /dev/nvme0n2 ]] 00:13:11.038 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:11.038 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:11.038 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:11.038 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:11.038 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:11.038 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:11.038 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:11.038 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:13:11.038 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:11.038 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:11.038 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:11.038 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:11.038 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:11.038 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:11.038 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:11.038 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:11.038 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:11.296 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.296 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:11.296 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:13:11.296 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:11.296 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:11.296 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:11.296 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:11.296 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:13:11.296 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:11.296 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:11.296 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.296 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:11.296 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.296 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:11.296 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:11.296 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:11.296 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:11.296 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:11.296 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:11.296 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:11.296 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:11.296 rmmod nvme_tcp 00:13:11.296 rmmod nvme_fabrics 00:13:11.554 rmmod nvme_keyring 00:13:11.554 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:11.554 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:11.554 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:11.554 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3603256 ']' 
00:13:11.554 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3603256 00:13:11.554 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 3603256 ']' 00:13:11.554 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 3603256 00:13:11.554 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:13:11.554 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:11.554 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3603256 00:13:11.554 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:11.554 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:11.554 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3603256' 00:13:11.554 killing process with pid 3603256 00:13:11.554 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 3603256 00:13:11.554 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 3603256 00:13:11.813 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:11.813 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:11.813 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:11.813 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:13:11.813 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:13:11.813 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:13:11.813 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:13:11.813 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:11.813 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:11.813 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.813 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:11.813 18:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.719 18:50:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:13.719 00:13:13.719 real 0m13.012s 00:13:13.719 user 0m20.017s 00:13:13.719 sys 0m5.056s 00:13:13.719 18:50:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:13.719 18:50:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:13.719 ************************************ 00:13:13.719 END TEST nvmf_nvme_cli 00:13:13.719 ************************************ 00:13:13.719 18:50:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:13:13.719 18:50:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:13.719 18:50:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:13.719 18:50:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:13.719 18:50:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:13.979 ************************************ 00:13:13.979 
START TEST nvmf_vfio_user 00:13:13.979 ************************************ 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:13.979 * Looking for test storage... 00:13:13.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:13:13.979 18:50:36 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:13:13.979 18:50:36 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:13.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.979 --rc genhtml_branch_coverage=1 00:13:13.979 --rc genhtml_function_coverage=1 00:13:13.979 --rc genhtml_legend=1 00:13:13.979 --rc geninfo_all_blocks=1 00:13:13.979 --rc geninfo_unexecuted_blocks=1 00:13:13.979 00:13:13.979 ' 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:13.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.979 --rc genhtml_branch_coverage=1 00:13:13.979 --rc genhtml_function_coverage=1 00:13:13.979 --rc genhtml_legend=1 00:13:13.979 --rc geninfo_all_blocks=1 00:13:13.979 --rc geninfo_unexecuted_blocks=1 00:13:13.979 00:13:13.979 ' 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:13.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.979 --rc genhtml_branch_coverage=1 00:13:13.979 --rc genhtml_function_coverage=1 00:13:13.979 --rc genhtml_legend=1 00:13:13.979 --rc geninfo_all_blocks=1 00:13:13.979 --rc geninfo_unexecuted_blocks=1 00:13:13.979 00:13:13.979 ' 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:13.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.979 --rc genhtml_branch_coverage=1 00:13:13.979 --rc genhtml_function_coverage=1 00:13:13.979 --rc genhtml_legend=1 00:13:13.979 --rc geninfo_all_blocks=1 00:13:13.979 --rc geninfo_unexecuted_blocks=1 00:13:13.979 00:13:13.979 ' 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.979 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:13.980 
18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:13.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:13.980 18:50:36 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3604555 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3604555' 00:13:13.980 Process pid: 3604555 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3604555 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' 
-z 3604555 ']' 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:13.980 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:14.238 [2024-11-20 18:50:36.330093] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:13:14.238 [2024-11-20 18:50:36.330141] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:14.238 [2024-11-20 18:50:36.403370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:14.238 [2024-11-20 18:50:36.443055] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:14.238 [2024-11-20 18:50:36.443097] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:14.238 [2024-11-20 18:50:36.443105] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:14.238 [2024-11-20 18:50:36.443111] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:14.238 [2024-11-20 18:50:36.443120] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:14.238 [2024-11-20 18:50:36.444527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.238 [2024-11-20 18:50:36.444637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:14.238 [2024-11-20 18:50:36.444720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.238 [2024-11-20 18:50:36.444721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:14.238 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:14.238 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:14.238 18:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:15.606 18:50:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:15.606 18:50:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:15.606 18:50:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:15.606 18:50:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:15.606 18:50:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:15.606 18:50:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:15.863 Malloc1 00:13:15.863 18:50:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:15.863 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:16.168 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:16.447 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:16.447 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:16.447 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:16.724 Malloc2 00:13:16.724 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:16.724 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:17.006 18:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:17.282 18:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:17.282 18:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:17.282 18:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:13:17.283 18:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:17.283 18:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:17.283 18:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:17.283 [2024-11-20 18:50:39.396292] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:13:17.283 [2024-11-20 18:50:39.396326] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3605046 ] 00:13:17.283 [2024-11-20 18:50:39.434677] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:17.283 [2024-11-20 18:50:39.439987] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:17.283 [2024-11-20 18:50:39.440008] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f7bbf0a3000 00:13:17.283 [2024-11-20 18:50:39.440985] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:17.283 [2024-11-20 18:50:39.441993] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:17.283 [2024-11-20 18:50:39.442999] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:17.283 [2024-11-20 18:50:39.444003] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:17.283 [2024-11-20 18:50:39.445001] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:17.283 [2024-11-20 18:50:39.446011] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:17.283 [2024-11-20 18:50:39.447013] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:17.283 [2024-11-20 18:50:39.448020] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:17.283 [2024-11-20 18:50:39.449027] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:17.283 [2024-11-20 18:50:39.449035] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f7bbf098000 00:13:17.283 [2024-11-20 18:50:39.449949] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:17.283 [2024-11-20 18:50:39.463468] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:17.283 [2024-11-20 18:50:39.463498] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:13:17.283 [2024-11-20 18:50:39.466118] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:13:17.283 [2024-11-20 18:50:39.466153] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:17.283 [2024-11-20 18:50:39.466222] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:13:17.283 [2024-11-20 18:50:39.466236] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:13:17.283 [2024-11-20 18:50:39.466242] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:13:17.283 [2024-11-20 18:50:39.467119] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:17.283 [2024-11-20 18:50:39.467130] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:13:17.283 [2024-11-20 18:50:39.467136] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:13:17.283 [2024-11-20 18:50:39.468125] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:17.283 [2024-11-20 18:50:39.468133] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:13:17.283 [2024-11-20 18:50:39.468140] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:17.283 [2024-11-20 18:50:39.469136] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:17.283 [2024-11-20 18:50:39.469143] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:17.283 [2024-11-20 18:50:39.470144] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:17.283 [2024-11-20 18:50:39.470151] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:17.283 [2024-11-20 18:50:39.470155] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:17.283 [2024-11-20 18:50:39.470161] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:17.283 [2024-11-20 18:50:39.470268] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:13:17.283 [2024-11-20 18:50:39.470273] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:17.283 [2024-11-20 18:50:39.470277] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:17.283 [2024-11-20 18:50:39.471148] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:17.283 [2024-11-20 18:50:39.472153] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:17.283 [2024-11-20 18:50:39.473159] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:13:17.283 [2024-11-20 18:50:39.474159] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:17.283 [2024-11-20 18:50:39.474238] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:17.283 [2024-11-20 18:50:39.475173] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:17.283 [2024-11-20 18:50:39.475180] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:17.283 [2024-11-20 18:50:39.475185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:17.283 [2024-11-20 18:50:39.475204] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:13:17.283 [2024-11-20 18:50:39.475211] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:17.283 [2024-11-20 18:50:39.475227] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:17.283 [2024-11-20 18:50:39.475232] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:17.283 [2024-11-20 18:50:39.475235] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:17.283 [2024-11-20 18:50:39.475247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:17.283 [2024-11-20 18:50:39.475295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:17.283 [2024-11-20 18:50:39.475304] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:13:17.283 [2024-11-20 18:50:39.475308] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:13:17.283 [2024-11-20 18:50:39.475312] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:13:17.283 [2024-11-20 18:50:39.475316] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:17.283 [2024-11-20 18:50:39.475322] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:13:17.283 [2024-11-20 18:50:39.475326] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:13:17.283 [2024-11-20 18:50:39.475330] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:13:17.283 [2024-11-20 18:50:39.475338] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:17.283 [2024-11-20 18:50:39.475347] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:17.283 [2024-11-20 18:50:39.475362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:17.283 [2024-11-20 18:50:39.475372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:17.283 [2024-11-20 
18:50:39.475379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:17.283 [2024-11-20 18:50:39.475387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:17.283 [2024-11-20 18:50:39.475394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:17.283 [2024-11-20 18:50:39.475398] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:17.283 [2024-11-20 18:50:39.475404] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:17.283 [2024-11-20 18:50:39.475411] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:17.283 [2024-11-20 18:50:39.475421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:17.284 [2024-11-20 18:50:39.475427] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:13:17.284 [2024-11-20 18:50:39.475432] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:17.284 [2024-11-20 18:50:39.475439] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:13:17.284 [2024-11-20 18:50:39.475445] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:13:17.284 [2024-11-20 18:50:39.475453] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:17.284 [2024-11-20 18:50:39.475467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:17.284 [2024-11-20 18:50:39.475516] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:13:17.284 [2024-11-20 18:50:39.475523] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:17.284 [2024-11-20 18:50:39.475529] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:17.284 [2024-11-20 18:50:39.475533] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:17.284 [2024-11-20 18:50:39.475537] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:17.284 [2024-11-20 18:50:39.475542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:17.284 [2024-11-20 18:50:39.475556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:17.284 [2024-11-20 18:50:39.475564] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:13:17.284 [2024-11-20 18:50:39.475574] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:13:17.284 [2024-11-20 18:50:39.475581] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:17.284 [2024-11-20 18:50:39.475587] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:17.284 [2024-11-20 18:50:39.475591] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:17.284 [2024-11-20 18:50:39.475594] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:17.284 [2024-11-20 18:50:39.475599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:17.284 [2024-11-20 18:50:39.475618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:17.284 [2024-11-20 18:50:39.475629] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:17.284 [2024-11-20 18:50:39.475636] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:17.284 [2024-11-20 18:50:39.475642] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:17.284 [2024-11-20 18:50:39.475646] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:17.284 [2024-11-20 18:50:39.475649] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:17.284 [2024-11-20 18:50:39.475654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:17.284 [2024-11-20 18:50:39.475664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:17.284 [2024-11-20 18:50:39.475670] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:17.284 [2024-11-20 18:50:39.475678] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:17.284 [2024-11-20 18:50:39.475684] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:13:17.284 [2024-11-20 18:50:39.475690] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:17.284 [2024-11-20 18:50:39.475695] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:17.284 [2024-11-20 18:50:39.475699] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:13:17.284 [2024-11-20 18:50:39.475703] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:17.284 [2024-11-20 18:50:39.475707] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:13:17.284 [2024-11-20 18:50:39.475712] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:13:17.284 [2024-11-20 18:50:39.475726] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:17.284 [2024-11-20 18:50:39.475735] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:17.284 [2024-11-20 18:50:39.475744] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:17.284 [2024-11-20 18:50:39.475754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:17.284 [2024-11-20 18:50:39.475764] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:17.284 [2024-11-20 18:50:39.475774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:17.284 [2024-11-20 18:50:39.475783] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:17.284 [2024-11-20 18:50:39.475791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:17.284 [2024-11-20 18:50:39.475802] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:17.284 [2024-11-20 18:50:39.475806] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:17.284 [2024-11-20 18:50:39.475809] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:17.284 [2024-11-20 18:50:39.475812] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:17.284 [2024-11-20 18:50:39.475815] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:17.284 [2024-11-20 18:50:39.475821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:13:17.284 [2024-11-20 18:50:39.475827] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:17.284 [2024-11-20 18:50:39.475831] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:17.284 [2024-11-20 18:50:39.475834] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:17.284 [2024-11-20 18:50:39.475839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:17.284 [2024-11-20 18:50:39.475846] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:17.284 [2024-11-20 18:50:39.475850] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:17.284 [2024-11-20 18:50:39.475853] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:17.284 [2024-11-20 18:50:39.475858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:17.284 [2024-11-20 18:50:39.475865] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:17.284 [2024-11-20 18:50:39.475868] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:17.284 [2024-11-20 18:50:39.475871] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:17.284 [2024-11-20 18:50:39.475877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:17.284 [2024-11-20 18:50:39.475883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:13:17.284 [2024-11-20 18:50:39.475893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:17.284 [2024-11-20 18:50:39.475904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:17.284 [2024-11-20 18:50:39.475910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:17.284 ===================================================== 00:13:17.284 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:17.284 ===================================================== 00:13:17.284 Controller Capabilities/Features 00:13:17.284 ================================ 00:13:17.284 Vendor ID: 4e58 00:13:17.284 Subsystem Vendor ID: 4e58 00:13:17.284 Serial Number: SPDK1 00:13:17.284 Model Number: SPDK bdev Controller 00:13:17.284 Firmware Version: 25.01 00:13:17.284 Recommended Arb Burst: 6 00:13:17.284 IEEE OUI Identifier: 8d 6b 50 00:13:17.284 Multi-path I/O 00:13:17.284 May have multiple subsystem ports: Yes 00:13:17.284 May have multiple controllers: Yes 00:13:17.284 Associated with SR-IOV VF: No 00:13:17.284 Max Data Transfer Size: 131072 00:13:17.284 Max Number of Namespaces: 32 00:13:17.284 Max Number of I/O Queues: 127 00:13:17.284 NVMe Specification Version (VS): 1.3 00:13:17.284 NVMe Specification Version (Identify): 1.3 00:13:17.284 Maximum Queue Entries: 256 00:13:17.284 Contiguous Queues Required: Yes 00:13:17.284 Arbitration Mechanisms Supported 00:13:17.284 Weighted Round Robin: Not Supported 00:13:17.284 Vendor Specific: Not Supported 00:13:17.284 Reset Timeout: 15000 ms 00:13:17.284 Doorbell Stride: 4 bytes 00:13:17.284 NVM Subsystem Reset: Not Supported 00:13:17.284 Command Sets Supported 00:13:17.284 NVM Command Set: Supported 00:13:17.284 Boot Partition: Not Supported 00:13:17.284 Memory 
Page Size Minimum: 4096 bytes 00:13:17.284 Memory Page Size Maximum: 4096 bytes 00:13:17.284 Persistent Memory Region: Not Supported 00:13:17.284 Optional Asynchronous Events Supported 00:13:17.284 Namespace Attribute Notices: Supported 00:13:17.285 Firmware Activation Notices: Not Supported 00:13:17.285 ANA Change Notices: Not Supported 00:13:17.285 PLE Aggregate Log Change Notices: Not Supported 00:13:17.285 LBA Status Info Alert Notices: Not Supported 00:13:17.285 EGE Aggregate Log Change Notices: Not Supported 00:13:17.285 Normal NVM Subsystem Shutdown event: Not Supported 00:13:17.285 Zone Descriptor Change Notices: Not Supported 00:13:17.285 Discovery Log Change Notices: Not Supported 00:13:17.285 Controller Attributes 00:13:17.285 128-bit Host Identifier: Supported 00:13:17.285 Non-Operational Permissive Mode: Not Supported 00:13:17.285 NVM Sets: Not Supported 00:13:17.285 Read Recovery Levels: Not Supported 00:13:17.285 Endurance Groups: Not Supported 00:13:17.285 Predictable Latency Mode: Not Supported 00:13:17.285 Traffic Based Keep ALive: Not Supported 00:13:17.285 Namespace Granularity: Not Supported 00:13:17.285 SQ Associations: Not Supported 00:13:17.285 UUID List: Not Supported 00:13:17.285 Multi-Domain Subsystem: Not Supported 00:13:17.285 Fixed Capacity Management: Not Supported 00:13:17.285 Variable Capacity Management: Not Supported 00:13:17.285 Delete Endurance Group: Not Supported 00:13:17.285 Delete NVM Set: Not Supported 00:13:17.285 Extended LBA Formats Supported: Not Supported 00:13:17.285 Flexible Data Placement Supported: Not Supported 00:13:17.285 00:13:17.285 Controller Memory Buffer Support 00:13:17.285 ================================ 00:13:17.285 Supported: No 00:13:17.285 00:13:17.285 Persistent Memory Region Support 00:13:17.285 ================================ 00:13:17.285 Supported: No 00:13:17.285 00:13:17.285 Admin Command Set Attributes 00:13:17.285 ============================ 00:13:17.285 Security Send/Receive: Not Supported 
00:13:17.285 Format NVM: Not Supported 00:13:17.285 Firmware Activate/Download: Not Supported 00:13:17.285 Namespace Management: Not Supported 00:13:17.285 Device Self-Test: Not Supported 00:13:17.285 Directives: Not Supported 00:13:17.285 NVMe-MI: Not Supported 00:13:17.285 Virtualization Management: Not Supported 00:13:17.285 Doorbell Buffer Config: Not Supported 00:13:17.285 Get LBA Status Capability: Not Supported 00:13:17.285 Command & Feature Lockdown Capability: Not Supported 00:13:17.285 Abort Command Limit: 4 00:13:17.285 Async Event Request Limit: 4 00:13:17.285 Number of Firmware Slots: N/A 00:13:17.285 Firmware Slot 1 Read-Only: N/A 00:13:17.285 Firmware Activation Without Reset: N/A 00:13:17.285 Multiple Update Detection Support: N/A 00:13:17.285 Firmware Update Granularity: No Information Provided 00:13:17.285 Per-Namespace SMART Log: No 00:13:17.285 Asymmetric Namespace Access Log Page: Not Supported 00:13:17.285 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:17.285 Command Effects Log Page: Supported 00:13:17.285 Get Log Page Extended Data: Supported 00:13:17.285 Telemetry Log Pages: Not Supported 00:13:17.285 Persistent Event Log Pages: Not Supported 00:13:17.285 Supported Log Pages Log Page: May Support 00:13:17.285 Commands Supported & Effects Log Page: Not Supported 00:13:17.285 Feature Identifiers & Effects Log Page:May Support 00:13:17.285 NVMe-MI Commands & Effects Log Page: May Support 00:13:17.285 Data Area 4 for Telemetry Log: Not Supported 00:13:17.285 Error Log Page Entries Supported: 128 00:13:17.285 Keep Alive: Supported 00:13:17.285 Keep Alive Granularity: 10000 ms 00:13:17.285 00:13:17.285 NVM Command Set Attributes 00:13:17.285 ========================== 00:13:17.285 Submission Queue Entry Size 00:13:17.285 Max: 64 00:13:17.285 Min: 64 00:13:17.285 Completion Queue Entry Size 00:13:17.285 Max: 16 00:13:17.285 Min: 16 00:13:17.285 Number of Namespaces: 32 00:13:17.285 Compare Command: Supported 00:13:17.285 Write Uncorrectable 
Command: Not Supported 00:13:17.285 Dataset Management Command: Supported 00:13:17.285 Write Zeroes Command: Supported 00:13:17.285 Set Features Save Field: Not Supported 00:13:17.285 Reservations: Not Supported 00:13:17.285 Timestamp: Not Supported 00:13:17.285 Copy: Supported 00:13:17.285 Volatile Write Cache: Present 00:13:17.285 Atomic Write Unit (Normal): 1 00:13:17.285 Atomic Write Unit (PFail): 1 00:13:17.285 Atomic Compare & Write Unit: 1 00:13:17.285 Fused Compare & Write: Supported 00:13:17.285 Scatter-Gather List 00:13:17.285 SGL Command Set: Supported (Dword aligned) 00:13:17.285 SGL Keyed: Not Supported 00:13:17.285 SGL Bit Bucket Descriptor: Not Supported 00:13:17.285 SGL Metadata Pointer: Not Supported 00:13:17.285 Oversized SGL: Not Supported 00:13:17.285 SGL Metadata Address: Not Supported 00:13:17.285 SGL Offset: Not Supported 00:13:17.285 Transport SGL Data Block: Not Supported 00:13:17.285 Replay Protected Memory Block: Not Supported 00:13:17.285 00:13:17.285 Firmware Slot Information 00:13:17.285 ========================= 00:13:17.285 Active slot: 1 00:13:17.285 Slot 1 Firmware Revision: 25.01 00:13:17.285 00:13:17.285 00:13:17.285 Commands Supported and Effects 00:13:17.285 ============================== 00:13:17.285 Admin Commands 00:13:17.285 -------------- 00:13:17.285 Get Log Page (02h): Supported 00:13:17.285 Identify (06h): Supported 00:13:17.285 Abort (08h): Supported 00:13:17.285 Set Features (09h): Supported 00:13:17.285 Get Features (0Ah): Supported 00:13:17.285 Asynchronous Event Request (0Ch): Supported 00:13:17.285 Keep Alive (18h): Supported 00:13:17.285 I/O Commands 00:13:17.285 ------------ 00:13:17.285 Flush (00h): Supported LBA-Change 00:13:17.285 Write (01h): Supported LBA-Change 00:13:17.285 Read (02h): Supported 00:13:17.285 Compare (05h): Supported 00:13:17.285 Write Zeroes (08h): Supported LBA-Change 00:13:17.285 Dataset Management (09h): Supported LBA-Change 00:13:17.285 Copy (19h): Supported LBA-Change 00:13:17.285 
00:13:17.285 Error Log 00:13:17.285 ========= 00:13:17.285 00:13:17.285 Arbitration 00:13:17.285 =========== 00:13:17.285 Arbitration Burst: 1 00:13:17.285 00:13:17.285 Power Management 00:13:17.285 ================ 00:13:17.285 Number of Power States: 1 00:13:17.285 Current Power State: Power State #0 00:13:17.285 Power State #0: 00:13:17.285 Max Power: 0.00 W 00:13:17.285 Non-Operational State: Operational 00:13:17.285 Entry Latency: Not Reported 00:13:17.285 Exit Latency: Not Reported 00:13:17.285 Relative Read Throughput: 0 00:13:17.285 Relative Read Latency: 0 00:13:17.285 Relative Write Throughput: 0 00:13:17.285 Relative Write Latency: 0 00:13:17.285 Idle Power: Not Reported 00:13:17.285 Active Power: Not Reported 00:13:17.285 Non-Operational Permissive Mode: Not Supported 00:13:17.285 00:13:17.285 Health Information 00:13:17.285 ================== 00:13:17.285 Critical Warnings: 00:13:17.285 Available Spare Space: OK 00:13:17.285 Temperature: OK 00:13:17.285 Device Reliability: OK 00:13:17.285 Read Only: No 00:13:17.285 Volatile Memory Backup: OK 00:13:17.285 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:17.285 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:17.285 Available Spare: 0% 00:13:17.285 Available Sp[2024-11-20 18:50:39.475993] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:17.285 [2024-11-20 18:50:39.476002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:17.285 [2024-11-20 18:50:39.476025] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:13:17.285 [2024-11-20 18:50:39.476034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:17.285 [2024-11-20 18:50:39.476040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:17.285 [2024-11-20 18:50:39.476045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:17.285 [2024-11-20 18:50:39.476050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:17.285 [2024-11-20 18:50:39.478209] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:17.285 [2024-11-20 18:50:39.478219] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:17.285 [2024-11-20 18:50:39.479193] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:17.285 [2024-11-20 18:50:39.479246] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:13:17.285 [2024-11-20 18:50:39.479252] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:13:17.285 [2024-11-20 18:50:39.480204] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:17.285 [2024-11-20 18:50:39.480215] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:13:17.285 [2024-11-20 18:50:39.480261] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:17.285 [2024-11-20 18:50:39.481228] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:17.286 are Threshold: 0% 00:13:17.286 Life Percentage Used: 0% 
00:13:17.286 Data Units Read: 0 00:13:17.286 Data Units Written: 0 00:13:17.286 Host Read Commands: 0 00:13:17.286 Host Write Commands: 0 00:13:17.286 Controller Busy Time: 0 minutes 00:13:17.286 Power Cycles: 0 00:13:17.286 Power On Hours: 0 hours 00:13:17.286 Unsafe Shutdowns: 0 00:13:17.286 Unrecoverable Media Errors: 0 00:13:17.286 Lifetime Error Log Entries: 0 00:13:17.286 Warning Temperature Time: 0 minutes 00:13:17.286 Critical Temperature Time: 0 minutes 00:13:17.286 00:13:17.286 Number of Queues 00:13:17.286 ================ 00:13:17.286 Number of I/O Submission Queues: 127 00:13:17.286 Number of I/O Completion Queues: 127 00:13:17.286 00:13:17.286 Active Namespaces 00:13:17.286 ================= 00:13:17.286 Namespace ID:1 00:13:17.286 Error Recovery Timeout: Unlimited 00:13:17.286 Command Set Identifier: NVM (00h) 00:13:17.286 Deallocate: Supported 00:13:17.286 Deallocated/Unwritten Error: Not Supported 00:13:17.286 Deallocated Read Value: Unknown 00:13:17.286 Deallocate in Write Zeroes: Not Supported 00:13:17.286 Deallocated Guard Field: 0xFFFF 00:13:17.286 Flush: Supported 00:13:17.286 Reservation: Supported 00:13:17.286 Namespace Sharing Capabilities: Multiple Controllers 00:13:17.286 Size (in LBAs): 131072 (0GiB) 00:13:17.286 Capacity (in LBAs): 131072 (0GiB) 00:13:17.286 Utilization (in LBAs): 131072 (0GiB) 00:13:17.286 NGUID: 3EE7317596FB4D9780C11AFC865AB887 00:13:17.286 UUID: 3ee73175-96fb-4d97-80c1-1afc865ab887 00:13:17.286 Thin Provisioning: Not Supported 00:13:17.286 Per-NS Atomic Units: Yes 00:13:17.286 Atomic Boundary Size (Normal): 0 00:13:17.286 Atomic Boundary Size (PFail): 0 00:13:17.286 Atomic Boundary Offset: 0 00:13:17.286 Maximum Single Source Range Length: 65535 00:13:17.286 Maximum Copy Length: 65535 00:13:17.286 Maximum Source Range Count: 1 00:13:17.286 NGUID/EUI64 Never Reused: No 00:13:17.286 Namespace Write Protected: No 00:13:17.286 Number of LBA Formats: 1 00:13:17.286 Current LBA Format: LBA Format #00 00:13:17.286 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:13:17.286 00:13:17.286 18:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:17.543 [2024-11-20 18:50:39.711219] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:22.799 Initializing NVMe Controllers 00:13:22.799 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:22.799 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:22.799 Initialization complete. Launching workers. 00:13:22.799 ======================================================== 00:13:22.799 Latency(us) 00:13:22.799 Device Information : IOPS MiB/s Average min max 00:13:22.799 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39911.60 155.90 3207.45 948.44 7637.69 00:13:22.799 ======================================================== 00:13:22.799 Total : 39911.60 155.90 3207.45 948.44 7637.69 00:13:22.799 00:13:22.800 [2024-11-20 18:50:44.732902] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:22.800 18:50:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:22.800 [2024-11-20 18:50:44.966940] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:28.052 Initializing NVMe Controllers 00:13:28.052 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:28.052 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:28.052 Initialization complete. Launching workers. 00:13:28.052 ======================================================== 00:13:28.052 Latency(us) 00:13:28.052 Device Information : IOPS MiB/s Average min max 00:13:28.052 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16000.00 62.50 8009.87 4986.36 15963.82 00:13:28.052 ======================================================== 00:13:28.052 Total : 16000.00 62.50 8009.87 4986.36 15963.82 00:13:28.052 00:13:28.052 [2024-11-20 18:50:50.003261] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:28.052 18:50:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:28.052 [2024-11-20 18:50:50.216272] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:33.311 [2024-11-20 18:50:55.287480] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:33.311 Initializing NVMe Controllers 00:13:33.311 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:33.311 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:33.311 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:33.311 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:33.311 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:33.311 Initialization complete. 
Launching workers. 00:13:33.311 Starting thread on core 2 00:13:33.311 Starting thread on core 3 00:13:33.311 Starting thread on core 1 00:13:33.311 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:33.311 [2024-11-20 18:50:55.588615] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:36.588 [2024-11-20 18:50:58.651464] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:36.588 Initializing NVMe Controllers 00:13:36.588 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:36.588 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:36.588 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:36.588 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:36.588 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:36.588 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:36.588 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:36.588 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:36.588 Initialization complete. Launching workers. 
00:13:36.589 Starting thread on core 1 with urgent priority queue 00:13:36.589 Starting thread on core 2 with urgent priority queue 00:13:36.589 Starting thread on core 3 with urgent priority queue 00:13:36.589 Starting thread on core 0 with urgent priority queue 00:13:36.589 SPDK bdev Controller (SPDK1 ) core 0: 7861.67 IO/s 12.72 secs/100000 ios 00:13:36.589 SPDK bdev Controller (SPDK1 ) core 1: 7890.33 IO/s 12.67 secs/100000 ios 00:13:36.589 SPDK bdev Controller (SPDK1 ) core 2: 8181.00 IO/s 12.22 secs/100000 ios 00:13:36.589 SPDK bdev Controller (SPDK1 ) core 3: 10081.00 IO/s 9.92 secs/100000 ios 00:13:36.589 ======================================================== 00:13:36.589 00:13:36.589 18:50:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:36.845 [2024-11-20 18:50:58.941646] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:36.845 Initializing NVMe Controllers 00:13:36.845 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:36.845 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:36.845 Namespace ID: 1 size: 0GB 00:13:36.845 Initialization complete. 00:13:36.845 INFO: using host memory buffer for IO 00:13:36.845 Hello world! 
00:13:36.845 [2024-11-20 18:50:58.973860] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:36.845 18:50:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:37.101 [2024-11-20 18:50:59.262649] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:38.033 Initializing NVMe Controllers 00:13:38.033 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:38.033 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:38.033 Initialization complete. Launching workers. 00:13:38.034 submit (in ns) avg, min, max = 7615.6, 3161.9, 5992770.5 00:13:38.034 complete (in ns) avg, min, max = 18778.1, 1725.7, 5992563.8 00:13:38.034 00:13:38.034 Submit histogram 00:13:38.034 ================ 00:13:38.034 Range in us Cumulative Count 00:13:38.034 3.154 - 3.170: 0.0060% ( 1) 00:13:38.034 3.185 - 3.200: 0.0120% ( 1) 00:13:38.034 3.200 - 3.215: 0.1385% ( 21) 00:13:38.034 3.215 - 3.230: 0.6924% ( 92) 00:13:38.034 3.230 - 3.246: 1.9567% ( 210) 00:13:38.034 3.246 - 3.261: 4.1662% ( 367) 00:13:38.034 3.261 - 3.276: 7.3209% ( 524) 00:13:38.034 3.276 - 3.291: 12.6731% ( 889) 00:13:38.034 3.291 - 3.307: 18.5551% ( 977) 00:13:38.034 3.307 - 3.322: 24.2685% ( 949) 00:13:38.034 3.322 - 3.337: 30.9573% ( 1111) 00:13:38.034 3.337 - 3.352: 36.7550% ( 963) 00:13:38.034 3.352 - 3.368: 42.4022% ( 938) 00:13:38.034 3.368 - 3.383: 48.9223% ( 1083) 00:13:38.034 3.383 - 3.398: 55.0873% ( 1024) 00:13:38.034 3.398 - 3.413: 59.9579% ( 809) 00:13:38.034 3.413 - 3.429: 66.2071% ( 1038) 00:13:38.034 3.429 - 3.444: 72.6189% ( 1065) 00:13:38.034 3.444 - 3.459: 76.6165% ( 664) 00:13:38.034 3.459 - 3.474: 80.3010% ( 612) 00:13:38.034 3.474 - 3.490: 82.8176% ( 418) 
00:13:38.034 3.490 - 3.505: 84.3227% ( 250) 00:13:38.034 3.505 - 3.520: 85.3642% ( 173) 00:13:38.034 3.520 - 3.535: 86.0807% ( 119) 00:13:38.034 3.535 - 3.550: 86.7128% ( 105) 00:13:38.034 3.550 - 3.566: 87.1824% ( 78) 00:13:38.034 3.566 - 3.581: 87.8507% ( 111) 00:13:38.034 3.581 - 3.596: 88.6033% ( 125) 00:13:38.034 3.596 - 3.611: 89.6388% ( 172) 00:13:38.034 3.611 - 3.627: 90.4877% ( 141) 00:13:38.034 3.627 - 3.642: 91.4871% ( 166) 00:13:38.034 3.642 - 3.657: 92.5587% ( 178) 00:13:38.034 3.657 - 3.672: 93.5882% ( 171) 00:13:38.034 3.672 - 3.688: 94.4973% ( 151) 00:13:38.034 3.688 - 3.703: 95.3221% ( 137) 00:13:38.034 3.703 - 3.718: 96.0265% ( 117) 00:13:38.034 3.718 - 3.733: 96.6105% ( 97) 00:13:38.034 3.733 - 3.749: 97.1162% ( 84) 00:13:38.034 3.749 - 3.764: 97.3691% ( 42) 00:13:38.034 3.764 - 3.779: 97.6340% ( 44) 00:13:38.034 3.779 - 3.794: 97.9290% ( 49) 00:13:38.034 3.794 - 3.810: 98.0614% ( 22) 00:13:38.034 3.810 - 3.825: 98.2240% ( 27) 00:13:38.034 3.825 - 3.840: 98.3564% ( 22) 00:13:38.034 3.840 - 3.855: 98.4287% ( 12) 00:13:38.034 3.855 - 3.870: 98.5250% ( 16) 00:13:38.034 3.870 - 3.886: 98.5912% ( 11) 00:13:38.034 3.886 - 3.901: 98.6514% ( 10) 00:13:38.034 3.901 - 3.931: 98.8200% ( 28) 00:13:38.034 3.931 - 3.962: 98.9284% ( 18) 00:13:38.034 3.962 - 3.992: 98.9946% ( 11) 00:13:38.034 3.992 - 4.023: 99.0909% ( 16) 00:13:38.034 4.023 - 4.053: 99.1571% ( 11) 00:13:38.034 4.053 - 4.084: 99.1812% ( 4) 00:13:38.034 4.084 - 4.114: 99.2113% ( 5) 00:13:38.034 4.114 - 4.145: 99.2354% ( 4) 00:13:38.034 4.145 - 4.175: 99.2474% ( 2) 00:13:38.034 4.175 - 4.206: 99.3016% ( 9) 00:13:38.034 4.206 - 4.236: 99.3317% ( 5) 00:13:38.034 4.236 - 4.267: 99.3618% ( 5) 00:13:38.034 4.267 - 4.297: 99.3679% ( 1) 00:13:38.034 4.297 - 4.328: 99.3739% ( 1) 00:13:38.034 4.389 - 4.419: 99.3799% ( 1) 00:13:38.034 4.419 - 4.450: 99.3980% ( 3) 00:13:38.034 4.510 - 4.541: 99.4040% ( 1) 00:13:38.034 4.571 - 4.602: 99.4100% ( 1) 00:13:38.034 4.663 - 4.693: 99.4160% ( 1) 00:13:38.034 4.724 - 
4.754: 99.4220% ( 1) 00:13:38.034 4.815 - 4.846: 99.4341% ( 2) 00:13:38.034 4.846 - 4.876: 99.4401% ( 1) 00:13:38.034 5.242 - 5.272: 99.4521% ( 2) 00:13:38.034 5.272 - 5.303: 99.4582% ( 1) 00:13:38.034 5.394 - 5.425: 99.4642% ( 1) 00:13:38.034 5.516 - 5.547: 99.4702% ( 1) 00:13:38.034 5.547 - 5.577: 99.4822% ( 2) 00:13:38.034 5.730 - 5.760: 99.4883% ( 1) 00:13:38.034 5.821 - 5.851: 99.4943% ( 1) 00:13:38.034 5.912 - 5.943: 99.5003% ( 1) 00:13:38.034 6.004 - 6.034: 99.5063% ( 1) 00:13:38.034 6.034 - 6.065: 99.5123% ( 1) 00:13:38.034 6.126 - 6.156: 99.5184% ( 1) 00:13:38.034 6.217 - 6.248: 99.5244% ( 1) 00:13:38.034 6.278 - 6.309: 99.5364% ( 2) 00:13:38.034 6.370 - 6.400: 99.5424% ( 1) 00:13:38.034 6.461 - 6.491: 99.5485% ( 1) 00:13:38.034 6.613 - 6.644: 99.5545% ( 1) 00:13:38.034 6.735 - 6.766: 99.5605% ( 1) 00:13:38.034 6.766 - 6.796: 99.5665% ( 1) 00:13:38.034 6.857 - 6.888: 99.5725% ( 1) 00:13:38.034 7.040 - 7.070: 99.5786% ( 1) 00:13:38.034 7.131 - 7.162: 99.5846% ( 1) 00:13:38.034 7.162 - 7.192: 99.5906% ( 1) 00:13:38.034 7.253 - 7.284: 99.5966% ( 1) 00:13:38.034 7.284 - 7.314: 99.6207% ( 4) 00:13:38.034 7.314 - 7.345: 99.6267% ( 1) 00:13:38.034 7.406 - 7.436: 99.6388% ( 2) 00:13:38.034 7.467 - 7.497: 99.6568% ( 3) 00:13:38.034 7.497 - 7.528: 99.6629% ( 1) 00:13:38.034 7.589 - 7.619: 99.6749% ( 2) 00:13:38.034 7.619 - 7.650: 99.6809% ( 1) 00:13:38.034 7.741 - 7.771: 99.6930% ( 2) 00:13:38.034 7.802 - 7.863: 99.7050% ( 2) 00:13:38.034 7.985 - 8.046: 99.7170% ( 2) 00:13:38.034 8.046 - 8.107: 99.7291% ( 2) 00:13:38.034 8.168 - 8.229: 99.7351% ( 1) 00:13:38.034 8.229 - 8.290: 99.7411% ( 1) 00:13:38.034 8.290 - 8.350: 99.7471% ( 1) 00:13:38.034 8.472 - 8.533: 99.7592% ( 2) 00:13:38.034 8.594 - 8.655: 99.7652% ( 1) 00:13:38.034 8.655 - 8.716: 99.7712% ( 1) 00:13:38.034 8.716 - 8.777: 99.7772% ( 1) 00:13:38.034 8.838 - 8.899: 99.7833% ( 1) 00:13:38.034 8.899 - 8.960: 99.7953% ( 2) 00:13:38.034 8.960 - 9.021: 99.8073% ( 2) 00:13:38.034 9.021 - 9.082: 99.8134% ( 1) 
00:13:38.034 9.326 - 9.387: 99.8194% ( 1) 00:13:38.034 9.509 - 9.570: 99.8314% ( 2) 00:13:38.034 9.935 - 9.996: 99.8374% ( 1) 00:13:38.034 12.434 - 12.495: 99.8435% ( 1) 00:13:38.034 12.861 - 12.922: 99.8495% ( 1) 00:13:38.034 13.531 - 13.592: 99.8555% ( 1) 00:13:38.034 13.714 - 13.775: 99.8615% ( 1) 00:13:38.034 14.568 - 14.629: 99.8675% ( 1) 00:13:38.034 14.750 - 14.811: 99.8736% ( 1) 00:13:38.034 20.114 - 20.236: 99.8796% ( 1) 00:13:38.034 27.429 - 27.550: 99.8856% ( 1) 00:13:38.034 40.716 - 40.960: 99.8916% ( 1) 00:13:38.034 49.006 - 49.250: 99.8977% ( 1) 00:13:38.034 3448.442 - 3464.046: 99.9037% ( 1) 00:13:38.034 3994.575 - 4025.783: 99.9940% ( 15) 00:13:38.034 5991.863 - 6023.070: 100.0000% ( 1) 00:13:38.034 00:13:38.034 Complete histogram 00:13:38.034 ================== 00:13:38.034 Range in us Cumulative Count 00:13:38.034 1.722 - 1.730: 0.0060% ( 1) 00:13:38.034 1.752 - 1.760: 0.0361% ( 5) 00:13:38.034 1.760 - 1.768: 0.3311% ( 49) 00:13:38.034 1.768 - 1.775: 1.6616% ( 221) 00:13:38.034 1.775 - 1.783: 3.4437% ( 296) 00:13:38.034 1.783 - 1.790: 4.7923% ( 224) 00:13:38.034 1.790 - 1.798: 5.4786% ( 114) 00:13:38.034 1.798 - 1.806: 6.3817% ( 150) 00:13:38.034 1.806 - 1.813: 12.8597% ( 1076) 00:13:38.034 1.813 - 1.821: 37.0560% ( 4019) 00:13:38.034 1.821 - 1.829: 66.9837% ( 4971) 00:13:38.034 1.829 - 1.836: 82.4082% ( 2562) 00:13:38.034 1.836 - 1.844: 87.9952% ( 928) 00:13:38.034 1.844 - 1.851: 90.7827% ( 463) 00:13:38.034 1.851 - 1.859: 92.4624% ( 279) 00:13:38.034 1.859 - 1.867: 93.1427% ( 113) 00:13:38.034 1.867 - 1.874: 93.4437% ( 50) 00:13:38.034 1.874 - 1.882: 93.7869% ( 57) 00:13:38.034 1.882 - 1.890: 94.5394% ( 125) 00:13:38.034 1.890 - 1.897: 95.4606% ( 153) 00:13:38.034 1.897 - 1.905: 96.1650% ( 117) 00:13:38.034 1.905 - 1.912: 96.6165% ( 75) 00:13:38.034 1.912 - 1.920: 96.7971% ( 30) 00:13:38.034 1.920 - 1.928: 96.9597% ( 27) 00:13:38.034 1.928 - 1.935: 97.1222% ( 27) 00:13:38.034 1.935 - 1.943: 97.2848% ( 27) 00:13:38.034 1.943 - 1.950: 97.4955% ( 
35) 00:13:38.034 1.950 - 1.966: 97.6520% ( 26) 00:13:38.034 1.966 - 1.981: 97.6881% ( 6) 00:13:38.034 1.981 - 1.996: 97.7182% ( 5) 00:13:38.034 1.996 - 2.011: 97.7544% ( 6) 00:13:38.034 2.011 - 2.027: 97.8146% ( 10) 00:13:38.034 2.027 - 2.042: 97.8326% ( 3) 00:13:38.034 2.042 - 2.0[2024-11-20 18:51:00.280555] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:38.034 57: 97.8808% ( 8) 00:13:38.034 2.057 - 2.072: 97.9892% ( 18) 00:13:38.034 2.072 - 2.088: 98.1156% ( 21) 00:13:38.034 2.088 - 2.103: 98.1698% ( 9) 00:13:38.034 2.103 - 2.118: 98.1999% ( 5) 00:13:38.034 2.118 - 2.133: 98.2119% ( 2) 00:13:38.034 2.133 - 2.149: 98.2360% ( 4) 00:13:38.034 2.149 - 2.164: 98.2902% ( 9) 00:13:38.034 2.164 - 2.179: 98.3022% ( 2) 00:13:38.034 2.179 - 2.194: 98.3082% ( 1) 00:13:38.035 2.194 - 2.210: 98.4226% ( 19) 00:13:38.035 2.210 - 2.225: 98.9103% ( 81) 00:13:38.035 2.225 - 2.240: 99.0126% ( 17) 00:13:38.035 2.240 - 2.255: 99.0548% ( 7) 00:13:38.035 2.255 - 2.270: 99.1030% ( 8) 00:13:38.035 2.270 - 2.286: 99.1210% ( 3) 00:13:38.035 2.301 - 2.316: 99.1270% ( 1) 00:13:38.035 2.316 - 2.331: 99.1451% ( 3) 00:13:38.035 2.331 - 2.347: 99.1571% ( 2) 00:13:38.035 2.347 - 2.362: 99.1632% ( 1) 00:13:38.035 2.362 - 2.377: 99.1692% ( 1) 00:13:38.035 2.377 - 2.392: 99.1752% ( 1) 00:13:38.035 2.392 - 2.408: 99.1812% ( 1) 00:13:38.035 2.408 - 2.423: 99.1933% ( 2) 00:13:38.035 2.438 - 2.453: 99.1993% ( 1) 00:13:38.035 2.469 - 2.484: 99.2053% ( 1) 00:13:38.035 2.499 - 2.514: 99.2113% ( 1) 00:13:38.035 2.575 - 2.590: 99.2234% ( 2) 00:13:38.035 2.697 - 2.712: 99.2294% ( 1) 00:13:38.035 2.804 - 2.819: 99.2354% ( 1) 00:13:38.035 2.880 - 2.895: 99.2414% ( 1) 00:13:38.035 2.910 - 2.926: 99.2474% ( 1) 00:13:38.035 2.941 - 2.956: 99.2535% ( 1) 00:13:38.035 3.185 - 3.200: 99.2595% ( 1) 00:13:38.035 3.413 - 3.429: 99.2655% ( 1) 00:13:38.035 3.490 - 3.505: 99.2715% ( 1) 00:13:38.035 3.672 - 3.688: 99.2775% ( 1) 00:13:38.035 3.764 - 3.779: 99.2836% 
( 1) 00:13:38.035 3.901 - 3.931: 99.2896% ( 1) 00:13:38.035 3.931 - 3.962: 99.2956% ( 1) 00:13:38.035 4.084 - 4.114: 99.3016% ( 1) 00:13:38.035 4.571 - 4.602: 99.3076% ( 1) 00:13:38.035 4.632 - 4.663: 99.3137% ( 1) 00:13:38.035 4.724 - 4.754: 99.3197% ( 1) 00:13:38.035 4.785 - 4.815: 99.3257% ( 1) 00:13:38.035 4.846 - 4.876: 99.3317% ( 1) 00:13:38.035 4.876 - 4.907: 99.3377% ( 1) 00:13:38.035 4.907 - 4.937: 99.3438% ( 1) 00:13:38.035 4.937 - 4.968: 99.3498% ( 1) 00:13:38.035 5.150 - 5.181: 99.3618% ( 2) 00:13:38.035 5.181 - 5.211: 99.3739% ( 2) 00:13:38.035 5.333 - 5.364: 99.3799% ( 1) 00:13:38.035 5.364 - 5.394: 99.3859% ( 1) 00:13:38.035 5.394 - 5.425: 99.4040% ( 3) 00:13:38.035 5.516 - 5.547: 99.4100% ( 1) 00:13:38.035 5.699 - 5.730: 99.4160% ( 1) 00:13:38.035 5.730 - 5.760: 99.4220% ( 1) 00:13:38.035 5.760 - 5.790: 99.4281% ( 1) 00:13:38.035 5.851 - 5.882: 99.4341% ( 1) 00:13:38.035 6.187 - 6.217: 99.4401% ( 1) 00:13:38.035 6.309 - 6.339: 99.4521% ( 2) 00:13:38.035 6.370 - 6.400: 99.4582% ( 1) 00:13:38.035 6.400 - 6.430: 99.4642% ( 1) 00:13:38.035 6.552 - 6.583: 99.4702% ( 1) 00:13:38.035 6.613 - 6.644: 99.4822% ( 2) 00:13:38.035 6.766 - 6.796: 99.4883% ( 1) 00:13:38.035 6.857 - 6.888: 99.4943% ( 1) 00:13:38.035 6.888 - 6.918: 99.5003% ( 1) 00:13:38.035 6.918 - 6.949: 99.5063% ( 1) 00:13:38.035 7.010 - 7.040: 99.5123% ( 1) 00:13:38.035 7.131 - 7.162: 99.5184% ( 1) 00:13:38.035 7.253 - 7.284: 99.5244% ( 1) 00:13:38.035 7.497 - 7.528: 99.5304% ( 1) 00:13:38.035 8.350 - 8.411: 99.5364% ( 1) 00:13:38.035 8.411 - 8.472: 99.5424% ( 1) 00:13:38.035 9.630 - 9.691: 99.5485% ( 1) 00:13:38.035 9.752 - 9.813: 99.5545% ( 1) 00:13:38.035 11.825 - 11.886: 99.5605% ( 1) 00:13:38.035 13.836 - 13.897: 99.5665% ( 1) 00:13:38.035 14.202 - 14.263: 99.5725% ( 1) 00:13:38.035 16.335 - 16.457: 99.5786% ( 1) 00:13:38.035 3526.461 - 3542.065: 99.5846% ( 1) 00:13:38.035 3994.575 - 4025.783: 99.9940% ( 68) 00:13:38.035 5991.863 - 6023.070: 100.0000% ( 1) 00:13:38.035 00:13:38.035 18:51:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:38.035 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:38.035 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:38.035 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:38.035 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:38.292 [ 00:13:38.292 { 00:13:38.292 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:38.292 "subtype": "Discovery", 00:13:38.292 "listen_addresses": [], 00:13:38.292 "allow_any_host": true, 00:13:38.292 "hosts": [] 00:13:38.292 }, 00:13:38.292 { 00:13:38.292 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:38.292 "subtype": "NVMe", 00:13:38.292 "listen_addresses": [ 00:13:38.292 { 00:13:38.292 "trtype": "VFIOUSER", 00:13:38.292 "adrfam": "IPv4", 00:13:38.292 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:38.292 "trsvcid": "0" 00:13:38.292 } 00:13:38.292 ], 00:13:38.292 "allow_any_host": true, 00:13:38.292 "hosts": [], 00:13:38.292 "serial_number": "SPDK1", 00:13:38.292 "model_number": "SPDK bdev Controller", 00:13:38.292 "max_namespaces": 32, 00:13:38.292 "min_cntlid": 1, 00:13:38.292 "max_cntlid": 65519, 00:13:38.292 "namespaces": [ 00:13:38.292 { 00:13:38.292 "nsid": 1, 00:13:38.292 "bdev_name": "Malloc1", 00:13:38.292 "name": "Malloc1", 00:13:38.292 "nguid": "3EE7317596FB4D9780C11AFC865AB887", 00:13:38.292 "uuid": "3ee73175-96fb-4d97-80c1-1afc865ab887" 00:13:38.292 } 00:13:38.292 ] 00:13:38.292 }, 00:13:38.292 { 00:13:38.292 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:38.292 "subtype": "NVMe", 
00:13:38.292 "listen_addresses": [ 00:13:38.292 { 00:13:38.292 "trtype": "VFIOUSER", 00:13:38.292 "adrfam": "IPv4", 00:13:38.292 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:38.292 "trsvcid": "0" 00:13:38.292 } 00:13:38.292 ], 00:13:38.292 "allow_any_host": true, 00:13:38.292 "hosts": [], 00:13:38.292 "serial_number": "SPDK2", 00:13:38.292 "model_number": "SPDK bdev Controller", 00:13:38.292 "max_namespaces": 32, 00:13:38.292 "min_cntlid": 1, 00:13:38.292 "max_cntlid": 65519, 00:13:38.292 "namespaces": [ 00:13:38.292 { 00:13:38.292 "nsid": 1, 00:13:38.292 "bdev_name": "Malloc2", 00:13:38.292 "name": "Malloc2", 00:13:38.292 "nguid": "79DA801586FB4E9DBADEBEED3AE3AF53", 00:13:38.292 "uuid": "79da8015-86fb-4e9d-bade-beed3ae3af53" 00:13:38.292 } 00:13:38.292 ] 00:13:38.292 } 00:13:38.292 ] 00:13:38.292 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:38.292 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:38.292 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3608540 00:13:38.292 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:38.293 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:13:38.293 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:38.293 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:38.293 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:13:38.293 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:38.293 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:38.550 [2024-11-20 18:51:00.676505] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:38.550 Malloc3 00:13:38.550 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:38.806 [2024-11-20 18:51:00.939552] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:38.806 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:38.806 Asynchronous Event Request test 00:13:38.806 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:38.806 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:38.806 Registering asynchronous event callbacks... 00:13:38.806 Starting namespace attribute notice tests for all controllers... 00:13:38.806 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:38.806 aer_cb - Changed Namespace 00:13:38.806 Cleaning up... 
00:13:39.066 [ 00:13:39.066 { 00:13:39.066 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:39.066 "subtype": "Discovery", 00:13:39.066 "listen_addresses": [], 00:13:39.066 "allow_any_host": true, 00:13:39.066 "hosts": [] 00:13:39.066 }, 00:13:39.066 { 00:13:39.066 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:39.066 "subtype": "NVMe", 00:13:39.066 "listen_addresses": [ 00:13:39.066 { 00:13:39.066 "trtype": "VFIOUSER", 00:13:39.066 "adrfam": "IPv4", 00:13:39.066 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:39.066 "trsvcid": "0" 00:13:39.066 } 00:13:39.066 ], 00:13:39.066 "allow_any_host": true, 00:13:39.066 "hosts": [], 00:13:39.066 "serial_number": "SPDK1", 00:13:39.066 "model_number": "SPDK bdev Controller", 00:13:39.066 "max_namespaces": 32, 00:13:39.066 "min_cntlid": 1, 00:13:39.066 "max_cntlid": 65519, 00:13:39.066 "namespaces": [ 00:13:39.066 { 00:13:39.066 "nsid": 1, 00:13:39.066 "bdev_name": "Malloc1", 00:13:39.066 "name": "Malloc1", 00:13:39.066 "nguid": "3EE7317596FB4D9780C11AFC865AB887", 00:13:39.066 "uuid": "3ee73175-96fb-4d97-80c1-1afc865ab887" 00:13:39.066 }, 00:13:39.066 { 00:13:39.066 "nsid": 2, 00:13:39.066 "bdev_name": "Malloc3", 00:13:39.066 "name": "Malloc3", 00:13:39.066 "nguid": "10225D7C6A72411B99D5B21F582F2C67", 00:13:39.066 "uuid": "10225d7c-6a72-411b-99d5-b21f582f2c67" 00:13:39.066 } 00:13:39.066 ] 00:13:39.066 }, 00:13:39.066 { 00:13:39.066 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:39.066 "subtype": "NVMe", 00:13:39.066 "listen_addresses": [ 00:13:39.066 { 00:13:39.066 "trtype": "VFIOUSER", 00:13:39.066 "adrfam": "IPv4", 00:13:39.066 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:39.066 "trsvcid": "0" 00:13:39.066 } 00:13:39.066 ], 00:13:39.066 "allow_any_host": true, 00:13:39.066 "hosts": [], 00:13:39.066 "serial_number": "SPDK2", 00:13:39.066 "model_number": "SPDK bdev Controller", 00:13:39.066 "max_namespaces": 32, 00:13:39.066 "min_cntlid": 1, 00:13:39.066 "max_cntlid": 65519, 00:13:39.066 "namespaces": [ 
00:13:39.066 { 00:13:39.066 "nsid": 1, 00:13:39.066 "bdev_name": "Malloc2", 00:13:39.066 "name": "Malloc2", 00:13:39.066 "nguid": "79DA801586FB4E9DBADEBEED3AE3AF53", 00:13:39.066 "uuid": "79da8015-86fb-4e9d-bade-beed3ae3af53" 00:13:39.066 } 00:13:39.066 ] 00:13:39.066 } 00:13:39.066 ] 00:13:39.066 18:51:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3608540 00:13:39.066 18:51:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:39.066 18:51:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:39.066 18:51:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:39.066 18:51:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:39.066 [2024-11-20 18:51:01.210463] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:13:39.066 [2024-11-20 18:51:01.210498] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3608789 ] 00:13:39.066 [2024-11-20 18:51:01.250597] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:39.066 [2024-11-20 18:51:01.255409] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:39.066 [2024-11-20 18:51:01.255433] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f24e840d000 00:13:39.066 [2024-11-20 18:51:01.256407] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:39.066 [2024-11-20 18:51:01.257412] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:39.066 [2024-11-20 18:51:01.258413] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:39.066 [2024-11-20 18:51:01.259426] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:39.066 [2024-11-20 18:51:01.260428] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:39.066 [2024-11-20 18:51:01.261433] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:39.066 [2024-11-20 18:51:01.262437] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:39.066 
[2024-11-20 18:51:01.263442] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:39.066 [2024-11-20 18:51:01.264452] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:39.066 [2024-11-20 18:51:01.264462] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f24e8402000 00:13:39.066 [2024-11-20 18:51:01.265381] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:39.066 [2024-11-20 18:51:01.276716] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:39.066 [2024-11-20 18:51:01.276741] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:13:39.066 [2024-11-20 18:51:01.280807] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:39.066 [2024-11-20 18:51:01.280844] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:39.066 [2024-11-20 18:51:01.280911] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:13:39.066 [2024-11-20 18:51:01.280923] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:13:39.066 [2024-11-20 18:51:01.280928] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:13:39.066 [2024-11-20 18:51:01.281818] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:39.066 [2024-11-20 18:51:01.281829] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:13:39.066 [2024-11-20 18:51:01.281838] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:13:39.066 [2024-11-20 18:51:01.282819] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:39.066 [2024-11-20 18:51:01.282827] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:13:39.067 [2024-11-20 18:51:01.282834] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:39.067 [2024-11-20 18:51:01.286210] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:39.067 [2024-11-20 18:51:01.286219] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:39.067 [2024-11-20 18:51:01.286841] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:39.067 [2024-11-20 18:51:01.286850] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:39.067 [2024-11-20 18:51:01.286854] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:39.067 [2024-11-20 18:51:01.286860] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:39.067 [2024-11-20 18:51:01.286967] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:13:39.067 [2024-11-20 18:51:01.286972] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:39.067 [2024-11-20 18:51:01.286976] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:39.067 [2024-11-20 18:51:01.287850] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:39.067 [2024-11-20 18:51:01.288859] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:39.067 [2024-11-20 18:51:01.289869] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:39.067 [2024-11-20 18:51:01.290875] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:39.067 [2024-11-20 18:51:01.290916] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:39.067 [2024-11-20 18:51:01.291882] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:39.067 [2024-11-20 18:51:01.291891] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:39.067 [2024-11-20 18:51:01.291896] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:39.067 [2024-11-20 18:51:01.291913] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:13:39.067 [2024-11-20 18:51:01.291923] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:39.067 [2024-11-20 18:51:01.291934] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:39.067 [2024-11-20 18:51:01.291941] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:39.067 [2024-11-20 18:51:01.291944] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:39.067 [2024-11-20 18:51:01.291956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:39.067 [2024-11-20 18:51:01.297211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:39.067 [2024-11-20 18:51:01.297223] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:13:39.067 [2024-11-20 18:51:01.297228] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:13:39.067 [2024-11-20 18:51:01.297231] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:13:39.067 [2024-11-20 18:51:01.297236] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:39.067 [2024-11-20 18:51:01.297242] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:13:39.067 [2024-11-20 18:51:01.297246] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:13:39.067 [2024-11-20 18:51:01.297251] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:13:39.067 [2024-11-20 18:51:01.297259] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:39.067 [2024-11-20 18:51:01.297269] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:39.067 [2024-11-20 18:51:01.305209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:39.067 [2024-11-20 18:51:01.305221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:39.067 [2024-11-20 18:51:01.305229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:39.067 [2024-11-20 18:51:01.305236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:39.067 [2024-11-20 18:51:01.305244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:39.067 [2024-11-20 18:51:01.305248] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:39.067 [2024-11-20 18:51:01.305254] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:39.067 [2024-11-20 18:51:01.305262] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:39.067 [2024-11-20 18:51:01.313209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:39.067 [2024-11-20 18:51:01.313220] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:13:39.067 [2024-11-20 18:51:01.313225] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:39.067 [2024-11-20 18:51:01.313231] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:13:39.067 [2024-11-20 18:51:01.313238] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:13:39.067 [2024-11-20 18:51:01.313247] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:39.067 [2024-11-20 18:51:01.321208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:39.067 [2024-11-20 18:51:01.321264] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:13:39.067 [2024-11-20 18:51:01.321272] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:39.067 
[2024-11-20 18:51:01.321279] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:39.067 [2024-11-20 18:51:01.321284] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:39.067 [2024-11-20 18:51:01.321287] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:39.067 [2024-11-20 18:51:01.321293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:39.067 [2024-11-20 18:51:01.329207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:39.067 [2024-11-20 18:51:01.329218] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:13:39.067 [2024-11-20 18:51:01.329229] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:13:39.067 [2024-11-20 18:51:01.329236] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:39.067 [2024-11-20 18:51:01.329244] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:39.067 [2024-11-20 18:51:01.329248] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:39.067 [2024-11-20 18:51:01.329251] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:39.067 [2024-11-20 18:51:01.329257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:39.067 [2024-11-20 18:51:01.337207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:39.067 [2024-11-20 18:51:01.337222] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:39.067 [2024-11-20 18:51:01.337228] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:39.067 [2024-11-20 18:51:01.337235] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:39.067 [2024-11-20 18:51:01.337239] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:39.067 [2024-11-20 18:51:01.337242] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:39.067 [2024-11-20 18:51:01.337248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:39.067 [2024-11-20 18:51:01.345207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:39.067 [2024-11-20 18:51:01.345216] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:39.067 [2024-11-20 18:51:01.345222] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:39.067 [2024-11-20 18:51:01.345232] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:13:39.067 [2024-11-20 18:51:01.345237] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:13:39.067 [2024-11-20 18:51:01.345242] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:39.067 [2024-11-20 18:51:01.345246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:13:39.067 [2024-11-20 18:51:01.345250] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:39.067 [2024-11-20 18:51:01.345254] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:13:39.068 [2024-11-20 18:51:01.345259] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:13:39.068 [2024-11-20 18:51:01.345275] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:39.068 [2024-11-20 18:51:01.353207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:39.068 [2024-11-20 18:51:01.353220] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:39.068 [2024-11-20 18:51:01.361207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:39.068 [2024-11-20 18:51:01.361220] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:39.068 [2024-11-20 18:51:01.369209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:39.068 [2024-11-20 
18:51:01.369221] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:39.068 [2024-11-20 18:51:01.377208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:39.068 [2024-11-20 18:51:01.377224] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:39.068 [2024-11-20 18:51:01.377229] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:39.068 [2024-11-20 18:51:01.377232] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:39.068 [2024-11-20 18:51:01.377235] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:39.068 [2024-11-20 18:51:01.377238] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:39.068 [2024-11-20 18:51:01.377244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:39.068 [2024-11-20 18:51:01.377251] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:39.068 [2024-11-20 18:51:01.377255] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:39.068 [2024-11-20 18:51:01.377258] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:39.068 [2024-11-20 18:51:01.377263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:39.068 [2024-11-20 18:51:01.377269] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:39.068 [2024-11-20 18:51:01.377275] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:39.068 [2024-11-20 18:51:01.377278] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:39.068 [2024-11-20 18:51:01.377283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:39.068 [2024-11-20 18:51:01.377290] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:39.068 [2024-11-20 18:51:01.377294] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:39.068 [2024-11-20 18:51:01.377297] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:39.068 [2024-11-20 18:51:01.377302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:39.068 [2024-11-20 18:51:01.385220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:39.068 [2024-11-20 18:51:01.385244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:39.068 [2024-11-20 18:51:01.385254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:39.068 [2024-11-20 18:51:01.385261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:39.068 ===================================================== 00:13:39.068 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:39.068 ===================================================== 00:13:39.068 Controller Capabilities/Features 00:13:39.068 
================================ 00:13:39.068 Vendor ID: 4e58 00:13:39.068 Subsystem Vendor ID: 4e58 00:13:39.068 Serial Number: SPDK2 00:13:39.068 Model Number: SPDK bdev Controller 00:13:39.068 Firmware Version: 25.01 00:13:39.068 Recommended Arb Burst: 6 00:13:39.068 IEEE OUI Identifier: 8d 6b 50 00:13:39.068 Multi-path I/O 00:13:39.068 May have multiple subsystem ports: Yes 00:13:39.068 May have multiple controllers: Yes 00:13:39.068 Associated with SR-IOV VF: No 00:13:39.068 Max Data Transfer Size: 131072 00:13:39.068 Max Number of Namespaces: 32 00:13:39.068 Max Number of I/O Queues: 127 00:13:39.068 NVMe Specification Version (VS): 1.3 00:13:39.068 NVMe Specification Version (Identify): 1.3 00:13:39.068 Maximum Queue Entries: 256 00:13:39.068 Contiguous Queues Required: Yes 00:13:39.068 Arbitration Mechanisms Supported 00:13:39.068 Weighted Round Robin: Not Supported 00:13:39.068 Vendor Specific: Not Supported 00:13:39.068 Reset Timeout: 15000 ms 00:13:39.068 Doorbell Stride: 4 bytes 00:13:39.068 NVM Subsystem Reset: Not Supported 00:13:39.068 Command Sets Supported 00:13:39.068 NVM Command Set: Supported 00:13:39.068 Boot Partition: Not Supported 00:13:39.068 Memory Page Size Minimum: 4096 bytes 00:13:39.068 Memory Page Size Maximum: 4096 bytes 00:13:39.068 Persistent Memory Region: Not Supported 00:13:39.068 Optional Asynchronous Events Supported 00:13:39.068 Namespace Attribute Notices: Supported 00:13:39.068 Firmware Activation Notices: Not Supported 00:13:39.068 ANA Change Notices: Not Supported 00:13:39.068 PLE Aggregate Log Change Notices: Not Supported 00:13:39.068 LBA Status Info Alert Notices: Not Supported 00:13:39.068 EGE Aggregate Log Change Notices: Not Supported 00:13:39.068 Normal NVM Subsystem Shutdown event: Not Supported 00:13:39.068 Zone Descriptor Change Notices: Not Supported 00:13:39.068 Discovery Log Change Notices: Not Supported 00:13:39.068 Controller Attributes 00:13:39.068 128-bit Host Identifier: Supported 00:13:39.068 
Non-Operational Permissive Mode: Not Supported 00:13:39.068 NVM Sets: Not Supported 00:13:39.068 Read Recovery Levels: Not Supported 00:13:39.068 Endurance Groups: Not Supported 00:13:39.068 Predictable Latency Mode: Not Supported 00:13:39.068 Traffic Based Keep ALive: Not Supported 00:13:39.068 Namespace Granularity: Not Supported 00:13:39.068 SQ Associations: Not Supported 00:13:39.068 UUID List: Not Supported 00:13:39.068 Multi-Domain Subsystem: Not Supported 00:13:39.068 Fixed Capacity Management: Not Supported 00:13:39.068 Variable Capacity Management: Not Supported 00:13:39.068 Delete Endurance Group: Not Supported 00:13:39.068 Delete NVM Set: Not Supported 00:13:39.068 Extended LBA Formats Supported: Not Supported 00:13:39.068 Flexible Data Placement Supported: Not Supported 00:13:39.068 00:13:39.068 Controller Memory Buffer Support 00:13:39.068 ================================ 00:13:39.068 Supported: No 00:13:39.068 00:13:39.068 Persistent Memory Region Support 00:13:39.068 ================================ 00:13:39.068 Supported: No 00:13:39.068 00:13:39.068 Admin Command Set Attributes 00:13:39.068 ============================ 00:13:39.068 Security Send/Receive: Not Supported 00:13:39.068 Format NVM: Not Supported 00:13:39.068 Firmware Activate/Download: Not Supported 00:13:39.068 Namespace Management: Not Supported 00:13:39.068 Device Self-Test: Not Supported 00:13:39.068 Directives: Not Supported 00:13:39.068 NVMe-MI: Not Supported 00:13:39.068 Virtualization Management: Not Supported 00:13:39.068 Doorbell Buffer Config: Not Supported 00:13:39.068 Get LBA Status Capability: Not Supported 00:13:39.068 Command & Feature Lockdown Capability: Not Supported 00:13:39.068 Abort Command Limit: 4 00:13:39.068 Async Event Request Limit: 4 00:13:39.068 Number of Firmware Slots: N/A 00:13:39.068 Firmware Slot 1 Read-Only: N/A 00:13:39.068 Firmware Activation Without Reset: N/A 00:13:39.068 Multiple Update Detection Support: N/A 00:13:39.068 Firmware Update 
Granularity: No Information Provided 00:13:39.068 Per-Namespace SMART Log: No 00:13:39.068 Asymmetric Namespace Access Log Page: Not Supported 00:13:39.068 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:39.068 Command Effects Log Page: Supported 00:13:39.068 Get Log Page Extended Data: Supported 00:13:39.068 Telemetry Log Pages: Not Supported 00:13:39.068 Persistent Event Log Pages: Not Supported 00:13:39.068 Supported Log Pages Log Page: May Support 00:13:39.068 Commands Supported & Effects Log Page: Not Supported 00:13:39.068 Feature Identifiers & Effects Log Page:May Support 00:13:39.068 NVMe-MI Commands & Effects Log Page: May Support 00:13:39.068 Data Area 4 for Telemetry Log: Not Supported 00:13:39.068 Error Log Page Entries Supported: 128 00:13:39.068 Keep Alive: Supported 00:13:39.068 Keep Alive Granularity: 10000 ms 00:13:39.068 00:13:39.068 NVM Command Set Attributes 00:13:39.068 ========================== 00:13:39.068 Submission Queue Entry Size 00:13:39.068 Max: 64 00:13:39.068 Min: 64 00:13:39.068 Completion Queue Entry Size 00:13:39.068 Max: 16 00:13:39.068 Min: 16 00:13:39.068 Number of Namespaces: 32 00:13:39.068 Compare Command: Supported 00:13:39.068 Write Uncorrectable Command: Not Supported 00:13:39.068 Dataset Management Command: Supported 00:13:39.068 Write Zeroes Command: Supported 00:13:39.068 Set Features Save Field: Not Supported 00:13:39.068 Reservations: Not Supported 00:13:39.069 Timestamp: Not Supported 00:13:39.069 Copy: Supported 00:13:39.069 Volatile Write Cache: Present 00:13:39.069 Atomic Write Unit (Normal): 1 00:13:39.069 Atomic Write Unit (PFail): 1 00:13:39.069 Atomic Compare & Write Unit: 1 00:13:39.069 Fused Compare & Write: Supported 00:13:39.069 Scatter-Gather List 00:13:39.069 SGL Command Set: Supported (Dword aligned) 00:13:39.069 SGL Keyed: Not Supported 00:13:39.069 SGL Bit Bucket Descriptor: Not Supported 00:13:39.069 SGL Metadata Pointer: Not Supported 00:13:39.069 Oversized SGL: Not Supported 00:13:39.069 SGL 
Metadata Address: Not Supported 00:13:39.069 SGL Offset: Not Supported 00:13:39.069 Transport SGL Data Block: Not Supported 00:13:39.069 Replay Protected Memory Block: Not Supported 00:13:39.069 00:13:39.069 Firmware Slot Information 00:13:39.069 ========================= 00:13:39.069 Active slot: 1 00:13:39.069 Slot 1 Firmware Revision: 25.01 00:13:39.069 00:13:39.069 00:13:39.069 Commands Supported and Effects 00:13:39.069 ============================== 00:13:39.069 Admin Commands 00:13:39.069 -------------- 00:13:39.069 Get Log Page (02h): Supported 00:13:39.069 Identify (06h): Supported 00:13:39.069 Abort (08h): Supported 00:13:39.069 Set Features (09h): Supported 00:13:39.069 Get Features (0Ah): Supported 00:13:39.069 Asynchronous Event Request (0Ch): Supported 00:13:39.069 Keep Alive (18h): Supported 00:13:39.069 I/O Commands 00:13:39.069 ------------ 00:13:39.069 Flush (00h): Supported LBA-Change 00:13:39.069 Write (01h): Supported LBA-Change 00:13:39.069 Read (02h): Supported 00:13:39.069 Compare (05h): Supported 00:13:39.069 Write Zeroes (08h): Supported LBA-Change 00:13:39.069 Dataset Management (09h): Supported LBA-Change 00:13:39.069 Copy (19h): Supported LBA-Change 00:13:39.069 00:13:39.069 Error Log 00:13:39.069 ========= 00:13:39.069 00:13:39.069 Arbitration 00:13:39.069 =========== 00:13:39.069 Arbitration Burst: 1 00:13:39.069 00:13:39.069 Power Management 00:13:39.069 ================ 00:13:39.069 Number of Power States: 1 00:13:39.069 Current Power State: Power State #0 00:13:39.069 Power State #0: 00:13:39.069 Max Power: 0.00 W 00:13:39.069 Non-Operational State: Operational 00:13:39.069 Entry Latency: Not Reported 00:13:39.069 Exit Latency: Not Reported 00:13:39.069 Relative Read Throughput: 0 00:13:39.069 Relative Read Latency: 0 00:13:39.069 Relative Write Throughput: 0 00:13:39.069 Relative Write Latency: 0 00:13:39.069 Idle Power: Not Reported 00:13:39.069 Active Power: Not Reported 00:13:39.069 Non-Operational Permissive Mode: Not 
Supported 00:13:39.069 00:13:39.069 Health Information 00:13:39.069 ================== 00:13:39.069 Critical Warnings: 00:13:39.069 Available Spare Space: OK 00:13:39.069 Temperature: OK 00:13:39.069 Device Reliability: OK 00:13:39.069 Read Only: No 00:13:39.069 Volatile Memory Backup: OK 00:13:39.069 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:39.069 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:39.069 Available Spare: 0% 00:13:39.069 Available Sp[2024-11-20 18:51:01.385355] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:39.326 [2024-11-20 18:51:01.393212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:39.326 [2024-11-20 18:51:01.393252] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:13:39.326 [2024-11-20 18:51:01.393261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:39.326 [2024-11-20 18:51:01.393268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:39.326 [2024-11-20 18:51:01.393273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:39.326 [2024-11-20 18:51:01.393279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:39.326 [2024-11-20 18:51:01.393335] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:39.326 [2024-11-20 18:51:01.393348] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:39.326 
[2024-11-20 18:51:01.394339] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:39.327 [2024-11-20 18:51:01.394384] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:13:39.327 [2024-11-20 18:51:01.394391] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:13:39.327 [2024-11-20 18:51:01.395338] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:39.327 [2024-11-20 18:51:01.395349] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:13:39.327 [2024-11-20 18:51:01.395396] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:39.327 [2024-11-20 18:51:01.398209] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:39.327 are Threshold: 0% 00:13:39.327 Life Percentage Used: 0% 00:13:39.327 Data Units Read: 0 00:13:39.327 Data Units Written: 0 00:13:39.327 Host Read Commands: 0 00:13:39.327 Host Write Commands: 0 00:13:39.327 Controller Busy Time: 0 minutes 00:13:39.327 Power Cycles: 0 00:13:39.327 Power On Hours: 0 hours 00:13:39.327 Unsafe Shutdowns: 0 00:13:39.327 Unrecoverable Media Errors: 0 00:13:39.327 Lifetime Error Log Entries: 0 00:13:39.327 Warning Temperature Time: 0 minutes 00:13:39.327 Critical Temperature Time: 0 minutes 00:13:39.327 00:13:39.327 Number of Queues 00:13:39.327 ================ 00:13:39.327 Number of I/O Submission Queues: 127 00:13:39.327 Number of I/O Completion Queues: 127 00:13:39.327 00:13:39.327 Active Namespaces 00:13:39.327 ================= 00:13:39.327 Namespace ID:1 00:13:39.327 Error Recovery Timeout: Unlimited 
00:13:39.327 Command Set Identifier: NVM (00h) 00:13:39.327 Deallocate: Supported 00:13:39.327 Deallocated/Unwritten Error: Not Supported 00:13:39.327 Deallocated Read Value: Unknown 00:13:39.327 Deallocate in Write Zeroes: Not Supported 00:13:39.327 Deallocated Guard Field: 0xFFFF 00:13:39.327 Flush: Supported 00:13:39.327 Reservation: Supported 00:13:39.327 Namespace Sharing Capabilities: Multiple Controllers 00:13:39.327 Size (in LBAs): 131072 (0GiB) 00:13:39.327 Capacity (in LBAs): 131072 (0GiB) 00:13:39.327 Utilization (in LBAs): 131072 (0GiB) 00:13:39.327 NGUID: 79DA801586FB4E9DBADEBEED3AE3AF53 00:13:39.327 UUID: 79da8015-86fb-4e9d-bade-beed3ae3af53 00:13:39.327 Thin Provisioning: Not Supported 00:13:39.327 Per-NS Atomic Units: Yes 00:13:39.327 Atomic Boundary Size (Normal): 0 00:13:39.327 Atomic Boundary Size (PFail): 0 00:13:39.327 Atomic Boundary Offset: 0 00:13:39.327 Maximum Single Source Range Length: 65535 00:13:39.327 Maximum Copy Length: 65535 00:13:39.327 Maximum Source Range Count: 1 00:13:39.327 NGUID/EUI64 Never Reused: No 00:13:39.327 Namespace Write Protected: No 00:13:39.327 Number of LBA Formats: 1 00:13:39.327 Current LBA Format: LBA Format #00 00:13:39.327 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:39.327 00:13:39.327 18:51:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:39.327 [2024-11-20 18:51:01.625388] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:44.581 Initializing NVMe Controllers 00:13:44.581 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:44.581 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:13:44.581 Initialization complete. Launching workers. 00:13:44.581 ======================================================== 00:13:44.581 Latency(us) 00:13:44.581 Device Information : IOPS MiB/s Average min max 00:13:44.581 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39951.14 156.06 3203.75 929.41 7114.02 00:13:44.581 ======================================================== 00:13:44.581 Total : 39951.14 156.06 3203.75 929.41 7114.02 00:13:44.581 00:13:44.581 [2024-11-20 18:51:06.733454] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:44.581 18:51:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:44.838 [2024-11-20 18:51:06.972150] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:50.093 Initializing NVMe Controllers 00:13:50.093 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:50.093 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:50.093 Initialization complete. Launching workers. 
00:13:50.093 ======================================================== 00:13:50.093 Latency(us) 00:13:50.093 Device Information : IOPS MiB/s Average min max 00:13:50.093 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39810.17 155.51 3215.07 964.58 10619.80 00:13:50.093 ======================================================== 00:13:50.093 Total : 39810.17 155.51 3215.07 964.58 10619.80 00:13:50.093 00:13:50.093 [2024-11-20 18:51:11.990469] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:50.093 18:51:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:50.093 [2024-11-20 18:51:12.204730] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:55.350 [2024-11-20 18:51:17.345301] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:55.350 Initializing NVMe Controllers 00:13:55.350 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:55.350 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:55.350 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:55.350 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:55.350 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:55.350 Initialization complete. Launching workers. 
00:13:55.350 Starting thread on core 2 00:13:55.350 Starting thread on core 3 00:13:55.350 Starting thread on core 1 00:13:55.350 18:51:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:55.350 [2024-11-20 18:51:17.644642] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:58.627 [2024-11-20 18:51:20.701451] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:58.627 Initializing NVMe Controllers 00:13:58.627 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:58.627 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:58.627 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:58.627 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:58.627 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:58.627 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:58.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:58.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:58.627 Initialization complete. Launching workers. 
00:13:58.627 Starting thread on core 1 with urgent priority queue 00:13:58.627 Starting thread on core 2 with urgent priority queue 00:13:58.627 Starting thread on core 3 with urgent priority queue 00:13:58.627 Starting thread on core 0 with urgent priority queue 00:13:58.627 SPDK bdev Controller (SPDK2 ) core 0: 9079.00 IO/s 11.01 secs/100000 ios 00:13:58.627 SPDK bdev Controller (SPDK2 ) core 1: 8396.33 IO/s 11.91 secs/100000 ios 00:13:58.627 SPDK bdev Controller (SPDK2 ) core 2: 7113.33 IO/s 14.06 secs/100000 ios 00:13:58.627 SPDK bdev Controller (SPDK2 ) core 3: 8853.33 IO/s 11.30 secs/100000 ios 00:13:58.627 ======================================================== 00:13:58.627 00:13:58.627 18:51:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:58.885 [2024-11-20 18:51:20.994663] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:58.885 Initializing NVMe Controllers 00:13:58.885 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:58.885 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:58.885 Namespace ID: 1 size: 0GB 00:13:58.885 Initialization complete. 00:13:58.885 INFO: using host memory buffer for IO 00:13:58.885 Hello world! 
00:13:58.885 [2024-11-20 18:51:21.004734] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:58.885 18:51:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:59.142 [2024-11-20 18:51:21.289929] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:00.075 Initializing NVMe Controllers 00:14:00.075 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:00.075 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:00.075 Initialization complete. Launching workers. 00:14:00.075 submit (in ns) avg, min, max = 5067.7, 3158.1, 4001404.8 00:14:00.075 complete (in ns) avg, min, max = 20535.9, 1708.6, 4074834.3 00:14:00.075 00:14:00.075 Submit histogram 00:14:00.075 ================ 00:14:00.075 Range in us Cumulative Count 00:14:00.075 3.154 - 3.170: 0.0120% ( 2) 00:14:00.075 3.170 - 3.185: 0.0361% ( 4) 00:14:00.075 3.185 - 3.200: 0.1144% ( 13) 00:14:00.075 3.200 - 3.215: 0.6264% ( 85) 00:14:00.075 3.215 - 3.230: 2.7587% ( 354) 00:14:00.075 3.230 - 3.246: 7.8725% ( 849) 00:14:00.075 3.246 - 3.261: 13.4321% ( 923) 00:14:00.075 3.261 - 3.276: 19.7928% ( 1056) 00:14:00.075 3.276 - 3.291: 26.5510% ( 1122) 00:14:00.075 3.291 - 3.307: 32.7912% ( 1036) 00:14:00.075 3.307 - 3.322: 38.5556% ( 957) 00:14:00.075 3.322 - 3.337: 44.6392% ( 1010) 00:14:00.075 3.337 - 3.352: 50.0241% ( 894) 00:14:00.075 3.352 - 3.368: 54.8368% ( 799) 00:14:00.075 3.368 - 3.383: 61.3661% ( 1084) 00:14:00.075 3.383 - 3.398: 69.5579% ( 1360) 00:14:00.075 3.398 - 3.413: 74.7621% ( 864) 00:14:00.075 3.413 - 3.429: 79.6169% ( 806) 00:14:00.075 3.429 - 3.444: 82.7129% ( 514) 00:14:00.075 3.444 - 3.459: 85.1825% ( 410) 00:14:00.075 3.459 - 3.474: 86.4173% ( 205) 
00:14:00.075 3.474 - 3.490: 87.1401% ( 120) 00:14:00.075 3.490 - 3.505: 87.5678% ( 71) 00:14:00.075 3.505 - 3.520: 88.0436% ( 79) 00:14:00.075 3.520 - 3.535: 88.5616% ( 86) 00:14:00.075 3.535 - 3.550: 89.3386% ( 129) 00:14:00.075 3.550 - 3.566: 90.3144% ( 162) 00:14:00.075 3.566 - 3.581: 91.2601% ( 157) 00:14:00.075 3.581 - 3.596: 92.3202% ( 176) 00:14:00.075 3.596 - 3.611: 93.2598% ( 156) 00:14:00.075 3.611 - 3.627: 94.1694% ( 151) 00:14:00.075 3.627 - 3.642: 95.1211% ( 158) 00:14:00.075 3.642 - 3.657: 96.0426% ( 153) 00:14:00.075 3.657 - 3.672: 96.9160% ( 145) 00:14:00.075 3.672 - 3.688: 97.5846% ( 111) 00:14:00.075 3.688 - 3.703: 98.0966% ( 85) 00:14:00.075 3.703 - 3.718: 98.5062% ( 68) 00:14:00.075 3.718 - 3.733: 98.8556% ( 58) 00:14:00.075 3.733 - 3.749: 99.0664% ( 35) 00:14:00.075 3.749 - 3.764: 99.1989% ( 22) 00:14:00.075 3.764 - 3.779: 99.3435% ( 24) 00:14:00.075 3.779 - 3.794: 99.4458% ( 17) 00:14:00.075 3.794 - 3.810: 99.5121% ( 11) 00:14:00.075 3.810 - 3.825: 99.5302% ( 3) 00:14:00.075 3.825 - 3.840: 99.5482% ( 3) 00:14:00.075 3.840 - 3.855: 99.5603% ( 2) 00:14:00.075 3.870 - 3.886: 99.5663% ( 1) 00:14:00.075 3.901 - 3.931: 99.5784% ( 2) 00:14:00.075 3.931 - 3.962: 99.5844% ( 1) 00:14:00.075 3.992 - 4.023: 99.5904% ( 1) 00:14:00.075 4.023 - 4.053: 99.6025% ( 2) 00:14:00.075 4.053 - 4.084: 99.6145% ( 2) 00:14:00.075 4.084 - 4.114: 99.6205% ( 1) 00:14:00.075 4.175 - 4.206: 99.6266% ( 1) 00:14:00.075 4.724 - 4.754: 99.6326% ( 1) 00:14:00.075 5.150 - 5.181: 99.6386% ( 1) 00:14:00.076 5.242 - 5.272: 99.6506% ( 2) 00:14:00.076 5.272 - 5.303: 99.6567% ( 1) 00:14:00.076 5.303 - 5.333: 99.6687% ( 2) 00:14:00.076 5.364 - 5.394: 99.6808% ( 2) 00:14:00.076 5.486 - 5.516: 99.6928% ( 2) 00:14:00.076 5.577 - 5.608: 99.6988% ( 1) 00:14:00.076 5.912 - 5.943: 99.7049% ( 1) 00:14:00.076 6.248 - 6.278: 99.7109% ( 1) 00:14:00.076 6.339 - 6.370: 99.7169% ( 1) 00:14:00.076 6.370 - 6.400: 99.7229% ( 1) 00:14:00.076 6.430 - 6.461: 99.7289% ( 1) 00:14:00.076 6.461 - 6.491: 
99.7350% ( 1) 00:14:00.076 6.522 - 6.552: 99.7470% ( 2) 00:14:00.076 6.583 - 6.613: 99.7530% ( 1) 00:14:00.076 6.613 - 6.644: 99.7591% ( 1) 00:14:00.076 6.735 - 6.766: 99.7651% ( 1) 00:14:00.076 6.796 - 6.827: 99.7711% ( 1) 00:14:00.076 6.827 - 6.857: 99.7771% ( 1) 00:14:00.076 6.888 - 6.918: 99.7892% ( 2) 00:14:00.076 6.918 - 6.949: 99.7952% ( 1) 00:14:00.076 7.010 - 7.040: 99.8012% ( 1) 00:14:00.076 7.040 - 7.070: 99.8073% ( 1) 00:14:00.076 7.101 - 7.131: 99.8133% ( 1) 00:14:00.076 7.131 - 7.162: 99.8193% ( 1) 00:14:00.076 7.192 - 7.223: 99.8253% ( 1) 00:14:00.076 7.284 - 7.314: 99.8313% ( 1) 00:14:00.076 7.314 - 7.345: 99.8374% ( 1) 00:14:00.076 7.345 - 7.375: 99.8434% ( 1) 00:14:00.076 7.406 - 7.436: 99.8494% ( 1) 00:14:00.076 7.436 - 7.467: 99.8675% ( 3) 00:14:00.076 7.497 - 7.528: 99.8735% ( 1) 00:14:00.076 7.558 - 7.589: 99.8795% ( 1) 00:14:00.076 7.589 - 7.619: 99.8856% ( 1) 00:14:00.076 7.619 - 7.650: 99.8916% ( 1) 00:14:00.076 7.650 - 7.680: 99.8976% ( 1) 00:14:00.076 7.680 - 7.710: 99.9036% ( 1) 00:14:00.076 7.802 - 7.863: 99.9157% ( 2) 00:14:00.076 7.863 - 7.924: 99.9217% ( 1) 00:14:00.076 8.046 - 8.107: 99.9277% ( 1) 00:14:00.076 8.107 - 8.168: 99.9337% ( 1) 00:14:00.076 8.290 - 8.350: 99.9398% ( 1) 00:14:00.076 8.716 - 8.777: 99.9458% ( 1) 00:14:00.076 8.960 - 9.021: 99.9518% ( 1) 00:14:00.076 11.825 - 11.886: 99.9578% ( 1) 00:14:00.076 3994.575 - 4025.783: 100.0000% ( 7) 00:14:00.076 00:14:00.076 Complete histogram 00:14:00.076 ================== 00:14:00.076 Range in us Cumulative Count 00:14:00.076 1.707 - 1.714: 0.0241% ( 4) 00:14:00.076 1.714 - 1.722: 0.1205% ( 16) 00:14:00.076 1.722 - 1.730: 0.3554% ( 39) 00:14:00.076 1.730 - 1.737: 0.4457% ( 15) 00:14:00.076 1.737 - 1.745: 0.4698% ( 4) 00:14:00.076 1.745 - 1.752: 0.4879% ( 3) 00:14:00.076 1.752 - 1.760: 0.8734% ( 64) 00:14:00.076 1.760 - 1.768: 6.8245% ( 988) 00:14:00.076 1.768 - 1.775: 30.3578% ( 3907) 00:14:00.076 1.775 - 1.783: 55.0114% ( 4093) 00:14:00.076 1.783 - 1.790: 63.6791% ( 1439) 
00:14:00.076 1.790 - 1.798: 66.9377% ( 541) 00:14:00.076 1.798 - 1.806: 69.0579% ( 352) 00:14:00.076 1.806 - 1.813: 70.5337% ( 245) 00:14:00.076 1.813 - 1.821: 74.6838% ( 689) 00:14:00.076 1.821 - 1.829: 84.0200% ( 1550) 00:14:00.076 1.829 - 1.836: 91.0914% ( 1174) 00:14:00.076 1.836 - 1.844: 94.4465% ( 557) 00:14:00.076 1.844 - 1.851: 96.3498% ( 316) 00:14:00.076 1.851 - 1.859: 97.3859% ( 172) 00:14:00.076 1.859 - 1.867: 97.8798% ( 82) 00:14:00.076 1.867 - 1.874: 98.1207% ( 40) 00:14:00.076 1.874 - 1.882: 98.2773% ( 26) 00:14:00.076 1.882 - 1.890: 98.4279% ( 25) 00:14:00.076 1.890 - 1.897: 98.5544% ( 21) 00:14:00.076 1.897 - 1.905: 98.7110% ( 26) 00:14:00.076 1.905 - 1.912: 98.9037% ( 32) 00:14:00.076 1.912 - 1.920: 98.9881% ( 14) 00:14:00.076 1.920 - 1.928: 99.0543% ( 11) 00:14:00.076 1.928 - 1.935: 99.0664% ( 2) 00:14:00.076 1.935 - 1.943: 99.0844% ( 3) 00:14:00.076 1.943 - 1.950: 99.0905% ( 1) 00:14:00.076 1.950 - 1.966: 99.1025% ( 2) 00:14:00.076 1.966 - 1.981: 99.1146% ( 2) 00:14:00.076 1.981 - 1.996: 99.1266% ( 2) 00:14:00.076 1.996 - 2.011: 99.1326% ( 1) 00:14:00.076 2.011 - 2.027: 99.1387% ( 1) 00:14:00.076 2.027 - 2.042: 99.1507% ( 2) 00:14:00.076 2.042 - 2.057: 99.1567% ( 1) 00:14:00.076 2.057 - 2.072: 99.1868% ( 5) 00:14:00.076 2.072 - 2.088: 99.2049% ( 3) 00:14:00.076 2.103 - 2.118: 99.2109% ( 1) 00:14:00.076 2.118 - 2.133: 99.2170% ( 1) 00:14:00.076 2.133 - 2.149: 99.2230% ( 1) 00:14:00.076 2.179 - 2.194: 99.2290% ( 1) 00:14:00.076 2.225 - 2.240: 99.2350% ( 1) 00:14:00.076 2.240 - 2.255: 99.2531% ( 3) 00:14:00.076 2.301 - 2.316: 99.2591% ( 1) 00:14:00.076 2.331 - 2.347: 99.2651% ( 1) 00:14:00.076 2.499 - 2.514: 99.2712% ( 1) 00:14:00.076 3.749 - 3.764: 99.2772% ( 1) 00:14:00.076 3.840 - 3.855: 99.2892% ( 2) 00:14:00.076 3.931 - 3.962: 99.2953% ( 1) 00:14:00.076 3.992 - 4.023: 99.3013% ( 1) 00:14:00.076 4.084 - 4.114: 99.3194% ( 3) 00:14:00.076 4.328 - 4.358: 99.3254% ( 1) 00:14:00.076 4.358 - 4.389: 99.3314% ( 1) 00:14:00.076 4.389 - 4.419: 99.3435% ( 
2) 00:14:00.076 4.510 - 4.541: 99.3555% ( 2) 00:14:00.076 4.876 - 4.907: 99.3615% ( 1) 00:14:00.076 4.937 - 4.968: 99.3675% ( 1) 00:14:00.076 4.968 - 4.998: 99.3736% ( 1) 00:14:00.076 4.998 - 5.029: 99.3796% ( 1) 00:14:00.076 5.181 - 5.2[2024-11-20 18:51:22.383242] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:00.334 11: 99.3856% ( 1) 00:14:00.334 5.211 - 5.242: 99.3916% ( 1) 00:14:00.334 5.272 - 5.303: 99.3977% ( 1) 00:14:00.334 5.333 - 5.364: 99.4037% ( 1) 00:14:00.334 5.425 - 5.455: 99.4097% ( 1) 00:14:00.334 5.486 - 5.516: 99.4157% ( 1) 00:14:00.334 5.547 - 5.577: 99.4218% ( 1) 00:14:00.334 5.973 - 6.004: 99.4278% ( 1) 00:14:00.334 6.095 - 6.126: 99.4338% ( 1) 00:14:00.334 6.156 - 6.187: 99.4458% ( 2) 00:14:00.334 6.370 - 6.400: 99.4519% ( 1) 00:14:00.334 6.400 - 6.430: 99.4579% ( 1) 00:14:00.334 6.583 - 6.613: 99.4639% ( 1) 00:14:00.334 6.796 - 6.827: 99.4699% ( 1) 00:14:00.334 6.918 - 6.949: 99.4760% ( 1) 00:14:00.334 7.010 - 7.040: 99.4820% ( 1) 00:14:00.334 7.314 - 7.345: 99.4880% ( 1) 00:14:00.334 7.345 - 7.375: 99.4940% ( 1) 00:14:00.334 7.650 - 7.680: 99.5001% ( 1) 00:14:00.334 7.863 - 7.924: 99.5061% ( 1) 00:14:00.334 7.985 - 8.046: 99.5121% ( 1) 00:14:00.334 8.655 - 8.716: 99.5181% ( 1) 00:14:00.334 11.886 - 11.947: 99.5242% ( 1) 00:14:00.334 14.872 - 14.933: 99.5302% ( 1) 00:14:00.334 3261.196 - 3276.800: 99.5362% ( 1) 00:14:00.334 3994.575 - 4025.783: 99.9940% ( 76) 00:14:00.334 4056.990 - 4088.198: 100.0000% ( 1) 00:14:00.334 00:14:00.334 18:51:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:00.334 18:51:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:00.334 18:51:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local 
subnqn=nqn.2019-07.io.spdk:cnode2 00:14:00.334 18:51:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:00.334 18:51:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:00.334 [ 00:14:00.334 { 00:14:00.334 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:00.334 "subtype": "Discovery", 00:14:00.334 "listen_addresses": [], 00:14:00.334 "allow_any_host": true, 00:14:00.334 "hosts": [] 00:14:00.334 }, 00:14:00.334 { 00:14:00.334 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:00.334 "subtype": "NVMe", 00:14:00.334 "listen_addresses": [ 00:14:00.334 { 00:14:00.334 "trtype": "VFIOUSER", 00:14:00.334 "adrfam": "IPv4", 00:14:00.334 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:00.334 "trsvcid": "0" 00:14:00.334 } 00:14:00.334 ], 00:14:00.334 "allow_any_host": true, 00:14:00.334 "hosts": [], 00:14:00.334 "serial_number": "SPDK1", 00:14:00.334 "model_number": "SPDK bdev Controller", 00:14:00.334 "max_namespaces": 32, 00:14:00.334 "min_cntlid": 1, 00:14:00.334 "max_cntlid": 65519, 00:14:00.334 "namespaces": [ 00:14:00.334 { 00:14:00.334 "nsid": 1, 00:14:00.334 "bdev_name": "Malloc1", 00:14:00.334 "name": "Malloc1", 00:14:00.334 "nguid": "3EE7317596FB4D9780C11AFC865AB887", 00:14:00.334 "uuid": "3ee73175-96fb-4d97-80c1-1afc865ab887" 00:14:00.334 }, 00:14:00.334 { 00:14:00.334 "nsid": 2, 00:14:00.334 "bdev_name": "Malloc3", 00:14:00.334 "name": "Malloc3", 00:14:00.334 "nguid": "10225D7C6A72411B99D5B21F582F2C67", 00:14:00.334 "uuid": "10225d7c-6a72-411b-99d5-b21f582f2c67" 00:14:00.334 } 00:14:00.334 ] 00:14:00.334 }, 00:14:00.334 { 00:14:00.334 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:00.334 "subtype": "NVMe", 00:14:00.334 "listen_addresses": [ 00:14:00.334 { 00:14:00.334 "trtype": "VFIOUSER", 00:14:00.334 "adrfam": "IPv4", 00:14:00.334 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 
00:14:00.334 "trsvcid": "0" 00:14:00.334 } 00:14:00.334 ], 00:14:00.334 "allow_any_host": true, 00:14:00.334 "hosts": [], 00:14:00.334 "serial_number": "SPDK2", 00:14:00.334 "model_number": "SPDK bdev Controller", 00:14:00.334 "max_namespaces": 32, 00:14:00.334 "min_cntlid": 1, 00:14:00.334 "max_cntlid": 65519, 00:14:00.334 "namespaces": [ 00:14:00.334 { 00:14:00.334 "nsid": 1, 00:14:00.334 "bdev_name": "Malloc2", 00:14:00.334 "name": "Malloc2", 00:14:00.334 "nguid": "79DA801586FB4E9DBADEBEED3AE3AF53", 00:14:00.334 "uuid": "79da8015-86fb-4e9d-bade-beed3ae3af53" 00:14:00.334 } 00:14:00.334 ] 00:14:00.334 } 00:14:00.334 ] 00:14:00.334 18:51:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:00.334 18:51:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3612692 00:14:00.334 18:51:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:00.334 18:51:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:00.334 18:51:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:00.334 18:51:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:00.334 18:51:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:00.334 18:51:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:00.334 18:51:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:00.334 18:51:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:00.592 [2024-11-20 18:51:22.786610] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:00.592 Malloc4 00:14:00.592 18:51:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:00.849 [2024-11-20 18:51:23.014259] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:00.849 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:00.849 Asynchronous Event Request test 00:14:00.849 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:00.849 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:00.849 Registering asynchronous event callbacks... 00:14:00.849 Starting namespace attribute notice tests for all controllers... 00:14:00.850 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:00.850 aer_cb - Changed Namespace 00:14:00.850 Cleaning up... 
00:14:01.107 [ 00:14:01.107 { 00:14:01.107 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:01.107 "subtype": "Discovery", 00:14:01.107 "listen_addresses": [], 00:14:01.107 "allow_any_host": true, 00:14:01.107 "hosts": [] 00:14:01.107 }, 00:14:01.107 { 00:14:01.107 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:01.107 "subtype": "NVMe", 00:14:01.107 "listen_addresses": [ 00:14:01.107 { 00:14:01.107 "trtype": "VFIOUSER", 00:14:01.107 "adrfam": "IPv4", 00:14:01.107 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:01.107 "trsvcid": "0" 00:14:01.107 } 00:14:01.107 ], 00:14:01.107 "allow_any_host": true, 00:14:01.107 "hosts": [], 00:14:01.107 "serial_number": "SPDK1", 00:14:01.107 "model_number": "SPDK bdev Controller", 00:14:01.107 "max_namespaces": 32, 00:14:01.107 "min_cntlid": 1, 00:14:01.107 "max_cntlid": 65519, 00:14:01.107 "namespaces": [ 00:14:01.107 { 00:14:01.107 "nsid": 1, 00:14:01.107 "bdev_name": "Malloc1", 00:14:01.107 "name": "Malloc1", 00:14:01.107 "nguid": "3EE7317596FB4D9780C11AFC865AB887", 00:14:01.107 "uuid": "3ee73175-96fb-4d97-80c1-1afc865ab887" 00:14:01.107 }, 00:14:01.107 { 00:14:01.107 "nsid": 2, 00:14:01.107 "bdev_name": "Malloc3", 00:14:01.107 "name": "Malloc3", 00:14:01.107 "nguid": "10225D7C6A72411B99D5B21F582F2C67", 00:14:01.107 "uuid": "10225d7c-6a72-411b-99d5-b21f582f2c67" 00:14:01.107 } 00:14:01.107 ] 00:14:01.107 }, 00:14:01.107 { 00:14:01.107 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:01.107 "subtype": "NVMe", 00:14:01.107 "listen_addresses": [ 00:14:01.107 { 00:14:01.107 "trtype": "VFIOUSER", 00:14:01.107 "adrfam": "IPv4", 00:14:01.107 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:01.107 "trsvcid": "0" 00:14:01.107 } 00:14:01.107 ], 00:14:01.107 "allow_any_host": true, 00:14:01.107 "hosts": [], 00:14:01.107 "serial_number": "SPDK2", 00:14:01.107 "model_number": "SPDK bdev Controller", 00:14:01.107 "max_namespaces": 32, 00:14:01.107 "min_cntlid": 1, 00:14:01.107 "max_cntlid": 65519, 00:14:01.107 "namespaces": [ 
00:14:01.107 { 00:14:01.107 "nsid": 1, 00:14:01.107 "bdev_name": "Malloc2", 00:14:01.107 "name": "Malloc2", 00:14:01.107 "nguid": "79DA801586FB4E9DBADEBEED3AE3AF53", 00:14:01.107 "uuid": "79da8015-86fb-4e9d-bade-beed3ae3af53" 00:14:01.107 }, 00:14:01.107 { 00:14:01.107 "nsid": 2, 00:14:01.107 "bdev_name": "Malloc4", 00:14:01.107 "name": "Malloc4", 00:14:01.107 "nguid": "BF795C939B8D4DE3AD5F25D27CE89ACE", 00:14:01.107 "uuid": "bf795c93-9b8d-4de3-ad5f-25d27ce89ace" 00:14:01.107 } 00:14:01.107 ] 00:14:01.107 } 00:14:01.107 ] 00:14:01.107 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3612692 00:14:01.107 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:01.107 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3604555 00:14:01.107 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3604555 ']' 00:14:01.107 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3604555 00:14:01.107 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:01.107 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:01.107 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3604555 00:14:01.107 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:01.107 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:01.107 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3604555' 00:14:01.107 killing process with pid 3604555 00:14:01.107 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 3604555 00:14:01.107 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3604555 00:14:01.366 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:01.366 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:01.366 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:01.366 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:01.366 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:01.366 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3612900 00:14:01.366 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3612900' 00:14:01.366 Process pid: 3612900 00:14:01.366 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:01.366 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:01.366 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3612900 00:14:01.366 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 3612900 ']' 00:14:01.366 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.366 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:01.366 
18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.366 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:01.366 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:01.366 [2024-11-20 18:51:23.582710] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:01.366 [2024-11-20 18:51:23.583591] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:14:01.366 [2024-11-20 18:51:23.583634] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:01.366 [2024-11-20 18:51:23.661944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:01.625 [2024-11-20 18:51:23.702539] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:01.625 [2024-11-20 18:51:23.702575] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:01.625 [2024-11-20 18:51:23.702581] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:01.625 [2024-11-20 18:51:23.702587] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:01.625 [2024-11-20 18:51:23.702592] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:01.625 [2024-11-20 18:51:23.703936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.625 [2024-11-20 18:51:23.704072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:01.625 [2024-11-20 18:51:23.704179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.625 [2024-11-20 18:51:23.704180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:01.625 [2024-11-20 18:51:23.771965] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:01.625 [2024-11-20 18:51:23.772975] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:14:01.625 [2024-11-20 18:51:23.773182] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:01.625 [2024-11-20 18:51:23.773443] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:01.625 [2024-11-20 18:51:23.773506] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:14:01.625 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:01.625 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:01.625 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:02.561 18:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:02.820 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:02.820 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:02.820 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:02.820 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:02.820 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:03.080 Malloc1 00:14:03.080 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:03.338 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:03.595 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:14:03.595 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:03.595 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:03.595 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:03.852 Malloc2 00:14:03.852 18:51:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:04.110 18:51:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:04.367 18:51:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:04.367 18:51:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:04.367 18:51:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3612900 00:14:04.367 18:51:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3612900 ']' 00:14:04.367 18:51:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3612900 00:14:04.367 18:51:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:04.367 18:51:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:04.367 18:51:26 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3612900 00:14:04.626 18:51:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:04.626 18:51:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:04.626 18:51:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3612900' 00:14:04.626 killing process with pid 3612900 00:14:04.626 18:51:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 3612900 00:14:04.626 18:51:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3612900 00:14:04.626 18:51:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:04.626 18:51:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:04.626 00:14:04.626 real 0m50.857s 00:14:04.626 user 3m16.698s 00:14:04.626 sys 0m3.219s 00:14:04.626 18:51:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:04.626 18:51:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:04.626 ************************************ 00:14:04.626 END TEST nvmf_vfio_user 00:14:04.626 ************************************ 00:14:04.886 18:51:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:04.886 18:51:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:04.886 18:51:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:04.886 18:51:26 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:14:04.886 ************************************ 00:14:04.886 START TEST nvmf_vfio_user_nvme_compliance 00:14:04.886 ************************************ 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:04.886 * Looking for test storage... 00:14:04.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:04.886 18:51:27 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:04.886 18:51:27 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:04.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.886 --rc genhtml_branch_coverage=1 00:14:04.886 --rc genhtml_function_coverage=1 00:14:04.886 --rc genhtml_legend=1 00:14:04.886 --rc geninfo_all_blocks=1 00:14:04.886 --rc geninfo_unexecuted_blocks=1 00:14:04.886 00:14:04.886 ' 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:04.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.886 --rc genhtml_branch_coverage=1 00:14:04.886 --rc genhtml_function_coverage=1 00:14:04.886 --rc genhtml_legend=1 00:14:04.886 --rc geninfo_all_blocks=1 00:14:04.886 --rc geninfo_unexecuted_blocks=1 00:14:04.886 00:14:04.886 ' 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:04.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.886 --rc genhtml_branch_coverage=1 00:14:04.886 --rc genhtml_function_coverage=1 00:14:04.886 --rc 
genhtml_legend=1 00:14:04.886 --rc geninfo_all_blocks=1 00:14:04.886 --rc geninfo_unexecuted_blocks=1 00:14:04.886 00:14:04.886 ' 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:04.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.886 --rc genhtml_branch_coverage=1 00:14:04.886 --rc genhtml_function_coverage=1 00:14:04.886 --rc genhtml_legend=1 00:14:04.886 --rc geninfo_all_blocks=1 00:14:04.886 --rc geninfo_unexecuted_blocks=1 00:14:04.886 00:14:04.886 ' 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:04.886 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.887 18:51:27 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:04.887 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:04.887 18:51:27 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3613477 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3613477' 00:14:04.887 Process pid: 3613477 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3613477 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 3613477 ']' 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.887 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:05.145 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:05.145 [2024-11-20 18:51:27.254364] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:14:05.145 [2024-11-20 18:51:27.254410] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.145 [2024-11-20 18:51:27.325863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:05.145 [2024-11-20 18:51:27.366959] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:05.145 [2024-11-20 18:51:27.366997] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:05.145 [2024-11-20 18:51:27.367005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:05.145 [2024-11-20 18:51:27.367011] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:05.145 [2024-11-20 18:51:27.367016] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:05.145 [2024-11-20 18:51:27.368328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.145 [2024-11-20 18:51:27.368436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.145 [2024-11-20 18:51:27.368436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:05.145 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:05.145 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:14:05.145 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:06.517 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:06.517 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:06.517 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:06.517 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.517 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:06.517 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.517 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:06.517 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:06.517 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.517 18:51:28 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:06.517 malloc0 00:14:06.517 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.517 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:06.517 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.517 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:06.517 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.517 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:06.517 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.517 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:06.517 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.517 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:06.517 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.517 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:06.517 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:06.517 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:06.517 00:14:06.517 00:14:06.517 CUnit - A unit testing framework for C - Version 2.1-3 00:14:06.517 http://cunit.sourceforge.net/ 00:14:06.517 00:14:06.517 00:14:06.517 Suite: nvme_compliance 00:14:06.517 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 18:51:28.697725] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:06.517 [2024-11-20 18:51:28.699059] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:06.517 [2024-11-20 18:51:28.699077] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:06.517 [2024-11-20 18:51:28.699084] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:06.517 [2024-11-20 18:51:28.700745] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:06.517 passed 00:14:06.517 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 18:51:28.779270] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:06.517 [2024-11-20 18:51:28.782292] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:06.517 passed 00:14:06.775 Test: admin_identify_ns ...[2024-11-20 18:51:28.861339] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:06.775 [2024-11-20 18:51:28.923214] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:06.775 [2024-11-20 18:51:28.931222] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:06.775 [2024-11-20 18:51:28.952306] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
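
The `rpc_cmd` calls traced above set up the VFIOUSER target before the compliance binary runs: create the transport, back it with a 64 MiB/512 B malloc bdev, create subsystem `nqn.2021-09.io.spdk:cnode0`, attach the namespace, and add the listener. A minimal sketch of that sequence follows; note that `rpc` here is a stand-in stub for SPDK's `scripts/rpc.py` (the real suite goes through its `rpc_cmd` wrapper), wired to a dry-run echo so the command sequence is visible without a running `nvmf_tgt`.

```shell
#!/bin/sh
# Dry-run sketch of the RPC setup sequence from the compliance test.
# "rpc" is a hypothetical stub for scripts/rpc.py in an SPDK checkout.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t VFIOUSER
rpc bdev_malloc_create 64 512 -b malloc0                      # 64 MiB bdev, 512 B blocks
rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
    -t VFIOUSER -a /var/run/vfio-user -s 0                    # socket dir as traddr
```

Against a live target, replacing the stub with the real `rpc.py` would perform the same setup the log shows succeeding (each call followed by `[[ 0 == 0 ]]`).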
/var/run/vfio-user: disabling controller 00:14:06.775 passed 00:14:06.775 Test: admin_get_features_mandatory_features ...[2024-11-20 18:51:29.026099] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:06.775 [2024-11-20 18:51:29.029117] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:06.775 passed 00:14:07.036 Test: admin_get_features_optional_features ...[2024-11-20 18:51:29.105610] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:07.036 [2024-11-20 18:51:29.108629] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:07.036 passed 00:14:07.036 Test: admin_set_features_number_of_queues ...[2024-11-20 18:51:29.183387] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:07.036 [2024-11-20 18:51:29.292284] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:07.036 passed 00:14:07.295 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 18:51:29.368046] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:07.295 [2024-11-20 18:51:29.371068] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:07.295 passed 00:14:07.295 Test: admin_get_log_page_with_lpo ...[2024-11-20 18:51:29.444628] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:07.295 [2024-11-20 18:51:29.512210] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:07.295 [2024-11-20 18:51:29.525272] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:07.295 passed 00:14:07.295 Test: fabric_property_get ...[2024-11-20 18:51:29.600694] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:07.295 [2024-11-20 18:51:29.601936] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:07.295 [2024-11-20 18:51:29.603716] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:07.552 passed 00:14:07.552 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 18:51:29.682235] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:07.552 [2024-11-20 18:51:29.683460] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:07.552 [2024-11-20 18:51:29.685256] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:07.552 passed 00:14:07.552 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 18:51:29.760382] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:07.552 [2024-11-20 18:51:29.845211] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:07.552 [2024-11-20 18:51:29.861215] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:07.552 [2024-11-20 18:51:29.866283] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:07.811 passed 00:14:07.811 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 18:51:29.943093] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:07.811 [2024-11-20 18:51:29.944338] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:07.811 [2024-11-20 18:51:29.946115] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:07.811 passed 00:14:07.811 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 18:51:30.018862] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:07.811 [2024-11-20 18:51:30.095214] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:07.811 [2024-11-20 
18:51:30.119211] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:07.811 [2024-11-20 18:51:30.124378] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:08.069 passed 00:14:08.069 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 18:51:30.200417] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:08.069 [2024-11-20 18:51:30.201646] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:08.069 [2024-11-20 18:51:30.201672] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:08.069 [2024-11-20 18:51:30.203436] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:08.069 passed 00:14:08.069 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 18:51:30.281304] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:08.069 [2024-11-20 18:51:30.375212] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:08.069 [2024-11-20 18:51:30.383233] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:08.069 [2024-11-20 18:51:30.391215] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:08.326 [2024-11-20 18:51:30.399218] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:08.326 [2024-11-20 18:51:30.428298] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:08.326 passed 00:14:08.326 Test: admin_create_io_sq_verify_pc ...[2024-11-20 18:51:30.501915] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:08.326 [2024-11-20 18:51:30.520218] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:08.326 [2024-11-20 18:51:30.538100] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:08.326 passed 00:14:08.326 Test: admin_create_io_qp_max_qps ...[2024-11-20 18:51:30.612604] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:09.698 [2024-11-20 18:51:31.707214] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:14:09.955 [2024-11-20 18:51:32.087206] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:09.955 passed 00:14:09.955 Test: admin_create_io_sq_shared_cq ...[2024-11-20 18:51:32.160998] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:10.214 [2024-11-20 18:51:32.292211] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:10.214 [2024-11-20 18:51:32.329270] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:10.214 passed 00:14:10.214 00:14:10.214 Run Summary: Type Total Ran Passed Failed Inactive 00:14:10.214 suites 1 1 n/a 0 0 00:14:10.214 tests 18 18 18 0 0 00:14:10.214 asserts 360 360 360 0 n/a 00:14:10.214 00:14:10.214 Elapsed time = 1.490 seconds 00:14:10.214 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3613477 00:14:10.214 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 3613477 ']' 00:14:10.214 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 3613477 00:14:10.214 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:14:10.214 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:10.214 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3613477 00:14:10.214 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:10.214 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:10.214 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3613477' 00:14:10.214 killing process with pid 3613477 00:14:10.214 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 3613477 00:14:10.214 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 3613477 00:14:10.472 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:10.472 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:10.472 00:14:10.472 real 0m5.608s 00:14:10.472 user 0m15.654s 00:14:10.472 sys 0m0.513s 00:14:10.472 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:10.472 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:10.472 ************************************ 00:14:10.472 END TEST nvmf_vfio_user_nvme_compliance 00:14:10.472 ************************************ 00:14:10.472 18:51:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:10.472 18:51:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:10.472 18:51:32 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:14:10.472 18:51:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:10.472 ************************************ 00:14:10.472 START TEST nvmf_vfio_user_fuzz 00:14:10.472 ************************************ 00:14:10.472 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:10.472 * Looking for test storage... 00:14:10.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:10.472 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:10.472 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:14:10.472 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:10.731 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:10.731 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:10.731 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:10.731 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:10.731 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:10.731 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:10.731 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:10.731 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:10.731 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:14:10.731 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:10.731 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:10.731 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:10.731 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:10.731 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:14:10.731 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:10.731 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:10.731 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:10.731 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:10.731 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:10.731 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:10.731 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:10.731 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:10.731 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:10.731 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:10.731 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:10.731 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:10.731 18:51:32 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:10.731 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:10.731 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:14:10.731 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:10.731 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:10.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.731 --rc genhtml_branch_coverage=1 00:14:10.731 --rc genhtml_function_coverage=1 00:14:10.731 --rc genhtml_legend=1 00:14:10.731 --rc geninfo_all_blocks=1 00:14:10.732 --rc geninfo_unexecuted_blocks=1 00:14:10.732 00:14:10.732 ' 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:10.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.732 --rc genhtml_branch_coverage=1 00:14:10.732 --rc genhtml_function_coverage=1 00:14:10.732 --rc genhtml_legend=1 00:14:10.732 --rc geninfo_all_blocks=1 00:14:10.732 --rc geninfo_unexecuted_blocks=1 00:14:10.732 00:14:10.732 ' 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:10.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.732 --rc genhtml_branch_coverage=1 00:14:10.732 --rc genhtml_function_coverage=1 00:14:10.732 --rc genhtml_legend=1 00:14:10.732 --rc geninfo_all_blocks=1 00:14:10.732 --rc geninfo_unexecuted_blocks=1 00:14:10.732 00:14:10.732 ' 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:10.732 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:10.732 --rc genhtml_branch_coverage=1 00:14:10.732 --rc genhtml_function_coverage=1 00:14:10.732 --rc genhtml_legend=1 00:14:10.732 --rc geninfo_all_blocks=1 00:14:10.732 --rc geninfo_unexecuted_blocks=1 00:14:10.732 00:14:10.732 ' 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.732 18:51:32 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:10.732 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3614462 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3614462' 00:14:10.732 Process pid: 3614462 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3614462 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 3614462 ']' 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:10.732 18:51:32 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:10.732 18:51:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:10.991 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:10.991 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:14:10.991 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:11.923 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:11.923 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.923 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:11.923 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.923 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:11.923 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:11.923 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.923 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:11.923 malloc0 00:14:11.923 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.923 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:11.923 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.923 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:11.923 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.923 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:11.923 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.923 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:11.923 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.923 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:11.923 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.923 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:11.923 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.923 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:11.923 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:44.058 Fuzzing completed. Shutting down the fuzz application 00:14:44.058 00:14:44.058 Dumping successful admin opcodes: 00:14:44.058 8, 9, 10, 24, 00:14:44.058 Dumping successful io opcodes: 00:14:44.058 0, 00:14:44.058 NS: 0x20000081ef00 I/O qp, Total commands completed: 1021369, total successful commands: 4016, random_seed: 1884071552 00:14:44.058 NS: 0x20000081ef00 admin qp, Total commands completed: 251367, total successful commands: 2031, random_seed: 230989120 00:14:44.058 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:44.058 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.058 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:44.058 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.058 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3614462 00:14:44.058 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 3614462 ']' 00:14:44.058 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 3614462 00:14:44.058 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:14:44.058 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:44.058 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3614462 00:14:44.058 18:52:04 
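The fuzz run above is driven by the transport ID string that the preceding RPC sequence (create VFIOUSER transport, malloc bdev, subsystem, namespace, listener) produces. As a minimal sketch, the `trid` value seen in the log is assembled from the NQN and socket path like this (values copied from the log itself; this is an illustration, not SPDK's actual script):

```shell
# Compose the vfio-user transport ID the way vfio_user_fuzz.sh's $trid
# appears in the log: trtype, subsystem NQN, and the vfio-user socket dir.
nqn=nqn.2021-09.io.spdk:cnode0
traddr=/var/run/vfio-user
trid="trtype:VFIOUSER subnqn:$nqn traddr:$traddr"
echo "$trid"
# prints: trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user
```

The string is then passed to `nvme_fuzz` via `-F`, which is why the log shows it quoted as a single argument.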
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:44.058 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:44.058 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3614462' 00:14:44.058 killing process with pid 3614462 00:14:44.058 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 3614462 00:14:44.058 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 3614462 00:14:44.058 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:44.058 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:44.058 00:14:44.058 real 0m32.220s 00:14:44.058 user 0m28.855s 00:14:44.058 sys 0m32.445s 00:14:44.058 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:44.058 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:44.058 ************************************ 00:14:44.058 END TEST nvmf_vfio_user_fuzz 00:14:44.058 ************************************ 00:14:44.058 18:52:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:44.058 18:52:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:44.058 18:52:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:14:44.058 18:52:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:44.058 ************************************ 00:14:44.058 START TEST nvmf_auth_target 00:14:44.058 ************************************ 00:14:44.058 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:44.058 * Looking for test storage... 00:14:44.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:44.058 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:44.058 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:14:44.058 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:44.058 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:44.058 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:44.058 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:44.058 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:44.058 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:44.058 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:44.058 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:44.058 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:44.058 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:44.058 18:52:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:44.058 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:44.058 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:44.058 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:14:44.058 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:14:44.058 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:44.058 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:44.058 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:14:44.058 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:14:44.058 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:44.058 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:14:44.058 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:44.058 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:14:44.058 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:14:44.058 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:44.058 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:14:44.058 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:44.059 18:52:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:44.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.059 --rc genhtml_branch_coverage=1 00:14:44.059 --rc genhtml_function_coverage=1 00:14:44.059 --rc genhtml_legend=1 00:14:44.059 --rc geninfo_all_blocks=1 00:14:44.059 --rc geninfo_unexecuted_blocks=1 00:14:44.059 00:14:44.059 ' 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:44.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.059 --rc genhtml_branch_coverage=1 00:14:44.059 --rc genhtml_function_coverage=1 00:14:44.059 --rc genhtml_legend=1 00:14:44.059 --rc geninfo_all_blocks=1 00:14:44.059 --rc geninfo_unexecuted_blocks=1 00:14:44.059 00:14:44.059 ' 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:44.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.059 --rc genhtml_branch_coverage=1 00:14:44.059 --rc genhtml_function_coverage=1 00:14:44.059 --rc genhtml_legend=1 00:14:44.059 --rc geninfo_all_blocks=1 00:14:44.059 --rc geninfo_unexecuted_blocks=1 00:14:44.059 00:14:44.059 ' 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:44.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.059 --rc genhtml_branch_coverage=1 00:14:44.059 --rc genhtml_function_coverage=1 00:14:44.059 --rc genhtml_legend=1 00:14:44.059 
--rc geninfo_all_blocks=1 00:14:44.059 --rc geninfo_unexecuted_blocks=1 00:14:44.059 00:14:44.059 ' 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:44.059 
18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:44.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:44.059 18:52:05 
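The message `[: : integer expression expected` that recurs above (from common.sh line 33) is a standard bash failure mode: an empty string reaches a numeric `-eq` test, so `[` cannot parse it as an integer and the test falls through to the else branch. A minimal sketch of the failure and one conventional guard follows; `flag` is a hypothetical stand-in, not the actual variable common.sh tests:

```shell
# An empty variable trips '[ "" -eq 1 ]' with "integer expression expected".
# Defaulting it with ${var:-0} guarantees '-eq' always sees an integer.
flag=""                           # empty, as in the failing run
if [ "${flag:-0}" -eq 1 ]; then   # ${flag:-0} expands to "0" when flag is empty
    msg="flag enabled"
else
    msg="flag disabled"
fi
echo "$msg"
# prints: flag disabled
```

Note the test harness tolerates this: the comparison evaluates as false, the script continues, and only the diagnostic line lands in the log.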
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:44.059 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:44.060 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:44.060 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:44.060 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:44.060 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:44.060 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:14:44.060 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:44.060 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:44.060 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:44.060 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:44.060 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:44.060 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.060 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:44.060 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.060 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:44.060 18:52:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:44.060 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:44.060 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:14:49.331 18:52:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:49.331 18:52:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:49.331 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:49.331 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:49.331 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:49.331 
18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:49.332 Found net devices under 0000:86:00.0: cvl_0_0 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:49.332 
18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:49.332 Found net devices under 0000:86:00.1: cvl_0_1 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:49.332 18:52:10 
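Annotation: the `pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` / `"${pci_net_devs[@]##*/}"` pattern in the trace resolves a PCI BDF to its kernel netdev names via sysfs. The same lookup, run against a throwaway mock of the sysfs tree so it is self-contained (the real path is `/sys/bus/pci/devices/...`):

```shell
#!/usr/bin/env bash
# Resolve a PCI address to its net interface names, as the trace does,
# but against a temp-dir mock of the sysfs layout.
sysfs=$(mktemp -d)
pci=0000:86:00.0
mkdir -p "$sysfs/devices/$pci/net/cvl_0_0"          # mock: one netdev under the BDF

pci_net_devs=("$sysfs/devices/$pci/net/"*)           # glob the netdev directories
pci_net_devs=("${pci_net_devs[@]##*/}")              # strip dirs, keep ifnames only
echo "Found net devices under $pci: ${pci_net_devs[*]}"
rm -rf "$sysfs"
```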
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:49.332 18:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
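Annotation: `nvmf_tcp_init` above moves the target-side interface into a private network namespace so initiator (10.0.0.1 on `cvl_0_1`) and target (10.0.0.2 on `cvl_0_0`) talk over a real wire. The sequence collected in one place; actually running it needs root, so here the commands are only staged into an array for inspection (interface and namespace names taken from the log):

```shell
#!/usr/bin/env bash
# Namespace plumbing from the trace, staged (not executed; needs root).
ns=cvl_0_0_ns_spdk tgt=cvl_0_0 ini=cvl_0_1
cmds=(
  "ip -4 addr flush $tgt"
  "ip -4 addr flush $ini"
  "ip netns add $ns"
  "ip link set $tgt netns $ns"                        # target NIC into the namespace
  "ip addr add 10.0.0.1/24 dev $ini"                  # initiator IP, default ns
  "ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt" # target IP, inside ns
  "ip link set $ini up"
  "ip netns exec $ns ip link set $tgt up"
  "ip netns exec $ns ip link set lo up"
)
printf '%s\n' "${cmds[@]}"
```

Later RPC and app invocations are then prefixed with `ip netns exec $ns`, which is exactly what `NVMF_TARGET_NS_CMD` holds in the trace.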
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:49.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:49.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.434 ms 00:14:49.332 00:14:49.332 --- 10.0.0.2 ping statistics --- 00:14:49.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.332 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:49.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:49.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:14:49.332 00:14:49.332 --- 10.0.0.1 ping statistics --- 00:14:49.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.332 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
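Annotation: the `ipts` wrapper in the trace tags its iptables rule with an `SPDK_NVMF:`-prefixed comment that repeats the rule itself, so cleanup can later find and delete exactly the rules the test inserted. A sketch of how that command string is assembled (staged only; applying it needs root):

```shell
#!/usr/bin/env bash
# Build the tagged ACCEPT rule for the NVMe/TCP listener port, as ipts does.
rule='-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
cmd="iptables $rule -m comment --comment \"SPDK_NVMF:$rule\""
echo "$cmd"
```

Cleanup can then `iptables-save | grep SPDK_NVMF:` and replay each match with `-D` to remove only test-owned rules.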
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3622978 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3622978 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3622978 ']' 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
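Annotation: `waitforlisten` above blocks until the freshly launched `nvmf_tgt` accepts RPCs on its UNIX socket. A toy stand-in for that poll loop — a throwaway python3 listener plays the part of the target, and the socket path is a temp name, so nothing here is the real SPDK helper:

```shell
#!/usr/bin/env bash
# Poll until a process is listening on a UNIX domain socket (waitforlisten-style).
sock=$(mktemp -u)
python3 -c 'import socket,sys,time
s = socket.socket(socket.AF_UNIX)
s.bind(sys.argv[1])     # creates the socket file, like nvmf_tgt binding spdk.sock
time.sleep(2)' "$sock" &
pid=$!

tries=0 listening=no
until [ -S "$sock" ]; do              # -S: path exists and is a socket
  tries=$((tries + 1))
  [ "$tries" -ge 100 ] && break       # ~5 s cap, then give up
  sleep 0.05
done
[ -S "$sock" ] && listening=yes
echo "listening=$listening after $tries polls"

kill "$pid" 2>/dev/null || true
wait "$pid" 2>/dev/null || true
rm -f "$sock"
```

The real helper additionally issues an RPC over the socket to confirm the app answers, not just that the file exists.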
00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3623004 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4b09e484f2d0eecef3985813b5117a4f8782c1fdb8cadc0e 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.pST 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4b09e484f2d0eecef3985813b5117a4f8782c1fdb8cadc0e 0 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4b09e484f2d0eecef3985813b5117a4f8782c1fdb8cadc0e 0 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:49.332 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4b09e484f2d0eecef3985813b5117a4f8782c1fdb8cadc0e 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.pST 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.pST 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.pST 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
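Annotation: `gen_dhchap_key` above pulls N random bytes from `/dev/urandom` as hex, then the inline `python -` wraps them in the DH-HMAC-CHAP secret representation `DHHC-1:<dd>:<base64>:`. A hedged re-creation of those two steps — the exact SPDK helper is not reproduced, and the CRC-32 suffix (appended little-endian before base64, per the secret format) is an assumption worth verifying against the spec:

```shell
#!/usr/bin/env bash
# 24 random bytes -> 48 hex chars (the trace uses xxd; od is the portable twin).
key=$(od -An -tx1 -N24 /dev/urandom | tr -d ' \n')
digest=0   # 00 = no hash transform; 01/02/03 = SHA-256/384/512 in the trace

# Wrap as DHHC-1:<digest>:<base64(secret || CRC-32(secret))>:
# (little-endian CRC byte order assumed here).
formatted=$(python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PY
)
echo "$formatted"
```

The result is what lands in `/tmp/spdk.key-null.XXX` (mode 0600) and is registered with the keyring later in the trace.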
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2f41dca97d538176871c7e5d4e7948938a179612277899952586d0eb2ca15040 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.0j4 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2f41dca97d538176871c7e5d4e7948938a179612277899952586d0eb2ca15040 3 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2f41dca97d538176871c7e5d4e7948938a179612277899952586d0eb2ca15040 3 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2f41dca97d538176871c7e5d4e7948938a179612277899952586d0eb2ca15040 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.0j4 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.0j4 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.0j4 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=dc1cdf0149e1ee9e97360d81d869c15a 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Mzt 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key dc1cdf0149e1ee9e97360d81d869c15a 1 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
dc1cdf0149e1ee9e97360d81d869c15a 1 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=dc1cdf0149e1ee9e97360d81d869c15a 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:49.333 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:49.593 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Mzt 00:14:49.593 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Mzt 00:14:49.593 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Mzt 00:14:49.593 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:14:49.593 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:49.593 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:49.593 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:49.593 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:49.593 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:49.593 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:49.593 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f60518a1823851e8ecd9687b82bb843d1526965e50c35ea6 00:14:49.593 18:52:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:49.593 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.eHE 00:14:49.593 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f60518a1823851e8ecd9687b82bb843d1526965e50c35ea6 2 00:14:49.593 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f60518a1823851e8ecd9687b82bb843d1526965e50c35ea6 2 00:14:49.593 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:49.593 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:49.593 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f60518a1823851e8ecd9687b82bb843d1526965e50c35ea6 00:14:49.593 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:49.593 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:49.593 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.eHE 00:14:49.593 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.eHE 00:14:49.593 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.eHE 00:14:49.593 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:14:49.593 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:49.593 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:49.593 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:14:49.593 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:49.593 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:49.593 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:49.593 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d5d746a32704cf8d80197f6c0c581ddad46b8b7bab5574da 00:14:49.593 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:49.593 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.63q 00:14:49.593 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d5d746a32704cf8d80197f6c0c581ddad46b8b7bab5574da 2 00:14:49.593 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d5d746a32704cf8d80197f6c0c581ddad46b8b7bab5574da 2 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d5d746a32704cf8d80197f6c0c581ddad46b8b7bab5574da 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.63q 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.63q 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.63q 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=042740e88902c67f752a6fde04982221 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.cfB 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 042740e88902c67f752a6fde04982221 1 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 042740e88902c67f752a6fde04982221 1 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=042740e88902c67f752a6fde04982221 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.cfB 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.cfB 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.cfB 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8a92b224664f00aec5fcaa10f6774f7534c953dc26b42ac848d917f1a03d27b1 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.rhG 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8a92b224664f00aec5fcaa10f6774f7534c953dc26b42ac848d917f1a03d27b1 3 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 8a92b224664f00aec5fcaa10f6774f7534c953dc26b42ac848d917f1a03d27b1 3 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8a92b224664f00aec5fcaa10f6774f7534c953dc26b42ac848d917f1a03d27b1 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.rhG 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.rhG 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.rhG 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3622978 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3622978 ']' 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:49.594 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.853 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:49.853 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:49.853 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3623004 /var/tmp/host.sock 00:14:49.853 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3623004 ']' 00:14:49.853 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:49.853 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:49.853 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:49.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:14:49.853 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:49.853 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.112 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:50.112 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:50.112 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:14:50.112 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.112 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.112 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.112 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:50.112 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.pST 00:14:50.112 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.112 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.112 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.112 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.pST 00:14:50.112 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.pST 00:14:50.370 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.0j4 ]] 00:14:50.370 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0j4 00:14:50.370 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.370 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.370 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.370 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0j4 00:14:50.370 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0j4 00:14:50.629 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:50.629 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Mzt 00:14:50.629 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.629 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.629 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.629 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Mzt 00:14:50.629 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Mzt 00:14:50.629 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.eHE ]] 00:14:50.629 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eHE 00:14:50.629 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.629 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.887 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.887 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eHE 00:14:50.887 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eHE 00:14:50.887 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:50.887 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.63q 00:14:50.887 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.887 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.887 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.887 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.63q 00:14:50.887 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.63q 00:14:51.144 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.cfB ]] 00:14:51.144 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.cfB 00:14:51.144 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.144 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.144 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.144 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.cfB 00:14:51.145 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.cfB 00:14:51.402 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:51.402 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.rhG 00:14:51.402 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.402 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.402 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.402 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.rhG 00:14:51.402 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.rhG 00:14:51.402 18:52:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:14:51.402 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:51.402 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:51.402 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:51.402 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:51.402 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:51.661 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:14:51.661 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:51.661 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:51.661 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:51.661 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:51.661 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.661 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.661 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.661 18:52:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.661 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.661 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.661 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.661 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.920 00:14:51.920 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:51.920 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:51.920 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.178 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.178 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.178 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.178 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:52.178 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.178 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:52.178 { 00:14:52.178 "cntlid": 1, 00:14:52.178 "qid": 0, 00:14:52.178 "state": "enabled", 00:14:52.178 "thread": "nvmf_tgt_poll_group_000", 00:14:52.178 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:52.178 "listen_address": { 00:14:52.178 "trtype": "TCP", 00:14:52.179 "adrfam": "IPv4", 00:14:52.179 "traddr": "10.0.0.2", 00:14:52.179 "trsvcid": "4420" 00:14:52.179 }, 00:14:52.179 "peer_address": { 00:14:52.179 "trtype": "TCP", 00:14:52.179 "adrfam": "IPv4", 00:14:52.179 "traddr": "10.0.0.1", 00:14:52.179 "trsvcid": "47326" 00:14:52.179 }, 00:14:52.179 "auth": { 00:14:52.179 "state": "completed", 00:14:52.179 "digest": "sha256", 00:14:52.179 "dhgroup": "null" 00:14:52.179 } 00:14:52.179 } 00:14:52.179 ]' 00:14:52.179 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:52.179 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:52.179 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:52.179 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:52.179 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:52.179 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.179 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.179 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.437 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=: 00:14:52.437 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=: 00:14:53.004 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.004 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:53.004 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.004 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.004 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.004 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:53.004 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:14:53.004 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:53.263 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:14:53.263 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:53.263 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:53.263 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:53.263 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:53.263 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.263 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.263 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.263 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.263 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.263 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.263 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.263 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.522 00:14:53.522 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:53.522 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:53.522 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.780 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.780 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.780 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.780 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.780 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.780 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:53.780 { 00:14:53.780 "cntlid": 3, 00:14:53.780 "qid": 0, 00:14:53.780 "state": "enabled", 00:14:53.780 "thread": "nvmf_tgt_poll_group_000", 00:14:53.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:53.780 "listen_address": { 00:14:53.780 "trtype": "TCP", 00:14:53.780 "adrfam": "IPv4", 00:14:53.780 
"traddr": "10.0.0.2", 00:14:53.780 "trsvcid": "4420" 00:14:53.780 }, 00:14:53.780 "peer_address": { 00:14:53.780 "trtype": "TCP", 00:14:53.780 "adrfam": "IPv4", 00:14:53.780 "traddr": "10.0.0.1", 00:14:53.780 "trsvcid": "47364" 00:14:53.780 }, 00:14:53.780 "auth": { 00:14:53.780 "state": "completed", 00:14:53.780 "digest": "sha256", 00:14:53.780 "dhgroup": "null" 00:14:53.780 } 00:14:53.780 } 00:14:53.780 ]' 00:14:53.780 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:53.780 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:53.780 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:53.780 18:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:53.780 18:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:53.780 18:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.780 18:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.780 18:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.039 18:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: --dhchap-ctrl-secret DHHC-1:02:ZjYwNTE4YTE4MjM4NTFlOGVjZDk2ODdiODJiYjg0M2QxNTI2OTY1ZTUwYzM1ZWE2EV/+Mg==: 00:14:54.039 18:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: --dhchap-ctrl-secret DHHC-1:02:ZjYwNTE4YTE4MjM4NTFlOGVjZDk2ODdiODJiYjg0M2QxNTI2OTY1ZTUwYzM1ZWE2EV/+Mg==: 00:14:54.606 18:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.606 18:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:54.606 18:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.606 18:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.606 18:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.606 18:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:54.606 18:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:54.606 18:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:54.865 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:14:54.865 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:54.865 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:54.865 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:14:54.865 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:54.865 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.865 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:54.865 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.865 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.865 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.865 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:54.865 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:54.865 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.124 00:14:55.124 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:55.124 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:55.124 
18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.383 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.383 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.383 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.383 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.383 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.383 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:55.383 { 00:14:55.383 "cntlid": 5, 00:14:55.383 "qid": 0, 00:14:55.383 "state": "enabled", 00:14:55.383 "thread": "nvmf_tgt_poll_group_000", 00:14:55.383 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:55.383 "listen_address": { 00:14:55.383 "trtype": "TCP", 00:14:55.383 "adrfam": "IPv4", 00:14:55.383 "traddr": "10.0.0.2", 00:14:55.383 "trsvcid": "4420" 00:14:55.383 }, 00:14:55.383 "peer_address": { 00:14:55.383 "trtype": "TCP", 00:14:55.383 "adrfam": "IPv4", 00:14:55.383 "traddr": "10.0.0.1", 00:14:55.383 "trsvcid": "47384" 00:14:55.383 }, 00:14:55.383 "auth": { 00:14:55.383 "state": "completed", 00:14:55.383 "digest": "sha256", 00:14:55.383 "dhgroup": "null" 00:14:55.383 } 00:14:55.383 } 00:14:55.383 ]' 00:14:55.383 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:55.383 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:55.383 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:14:55.383 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:55.383 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:55.383 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.383 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.383 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.642 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:01:MDQyNzQwZTg4OTAyYzY3Zjc1MmE2ZmRlMDQ5ODIyMjEWgd7d: 00:14:55.642 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:01:MDQyNzQwZTg4OTAyYzY3Zjc1MmE2ZmRlMDQ5ODIyMjEWgd7d: 00:14:56.209 18:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.209 18:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:56.209 18:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.209 18:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.209 18:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.209 18:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:56.209 18:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:56.209 18:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:56.467 18:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:14:56.467 18:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:56.467 18:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:56.467 18:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:56.467 18:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:56.467 18:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.467 18:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:14:56.467 18:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.467 18:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:14:56.467 18:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:56.467 18:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:14:56.467 18:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:56.467 18:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:56.725
00:14:56.725 18:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:56.725 18:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:56.725 18:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:56.983 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:56.983 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:56.983 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:56.983 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:56.983 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:56.983 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:56.983 {
00:14:56.983 "cntlid": 7,
00:14:56.983 "qid": 0,
00:14:56.983 "state": "enabled",
00:14:56.983 "thread": "nvmf_tgt_poll_group_000",
00:14:56.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:14:56.983 "listen_address": {
00:14:56.983 "trtype": "TCP",
00:14:56.983 "adrfam": "IPv4",
00:14:56.983 "traddr": "10.0.0.2",
00:14:56.983 "trsvcid": "4420"
00:14:56.983 },
00:14:56.983 "peer_address": {
00:14:56.983 "trtype": "TCP",
00:14:56.983 "adrfam": "IPv4",
00:14:56.983 "traddr": "10.0.0.1",
00:14:56.983 "trsvcid": "47402"
00:14:56.983 },
00:14:56.983 "auth": {
00:14:56.983 "state": "completed",
00:14:56.983 "digest": "sha256",
00:14:56.983 "dhgroup": "null"
00:14:56.983 }
00:14:56.983 }
00:14:56.983 ]'
00:14:56.983 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:56.983 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:56.983 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:56.983 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:14:56.983 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:56.983 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:56.983 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:56.983 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:57.241 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=:
00:14:57.241 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=:
00:14:57.808 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:57.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:57.808 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:14:57.808 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:57.808 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:57.808 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:57.808 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:14:57.808 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:57.808 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:14:57.808 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:14:58.067 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0
00:14:58.067 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:58.067 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:58.067 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:14:58.067 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:14:58.067 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:58.067 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:58.067 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:58.067 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:58.067 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:58.067 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:58.067 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:58.067 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:58.326
00:14:58.326 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:58.326 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:58.326 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:58.585 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:58.585 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:58.585 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:58.585 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:58.585 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:58.585 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:58.585 {
00:14:58.585 "cntlid": 9,
00:14:58.585 "qid": 0,
00:14:58.585 "state": "enabled",
00:14:58.585 "thread": "nvmf_tgt_poll_group_000",
00:14:58.585 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:14:58.585 "listen_address": {
00:14:58.585 "trtype": "TCP",
00:14:58.585 "adrfam": "IPv4",
00:14:58.585 "traddr": "10.0.0.2",
00:14:58.585 "trsvcid": "4420"
00:14:58.585 },
00:14:58.585 "peer_address": {
00:14:58.585 "trtype": "TCP",
00:14:58.585 "adrfam": "IPv4",
00:14:58.585 "traddr": "10.0.0.1",
00:14:58.585 "trsvcid": "54344" },
00:14:58.585 "auth": {
00:14:58.585 "state": "completed",
00:14:58.585 "digest": "sha256",
00:14:58.585 "dhgroup": "ffdhe2048"
00:14:58.585 }
00:14:58.585 }
00:14:58.585 ]'
00:14:58.585 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:58.585 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:58.585 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:58.585 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:14:58.585 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:58.585 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:58.585 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:58.585 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:58.843 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=:
00:14:58.843 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=:
00:14:59.408 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:59.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:59.408 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:14:59.408 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:59.408 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:59.408 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:59.408 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:59.408 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:14:59.408 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:14:59.666 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1
00:14:59.666 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:59.666 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:59.666 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:14:59.666 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:14:59.666 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:59.666 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:59.666 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:59.666 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:59.666 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:59.666 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:59.666 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:59.666 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:59.924
00:14:59.924 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:59.924 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:59.924 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:59.924 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:59.924 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:59.924 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:59.924 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:00.182 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:00.182 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:00.182 {
00:15:00.182 "cntlid": 11,
00:15:00.182 "qid": 0,
00:15:00.182 "state": "enabled",
00:15:00.182 "thread": "nvmf_tgt_poll_group_000",
00:15:00.182 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:15:00.182 "listen_address": {
00:15:00.182 "trtype": "TCP",
00:15:00.182 "adrfam": "IPv4",
00:15:00.182 "traddr": "10.0.0.2",
00:15:00.182 "trsvcid": "4420"
00:15:00.182 },
00:15:00.182 "peer_address": {
00:15:00.182 "trtype": "TCP",
00:15:00.182 "adrfam": "IPv4",
00:15:00.182 "traddr": "10.0.0.1",
00:15:00.182 "trsvcid": "54374"
00:15:00.182 },
00:15:00.182 "auth": {
00:15:00.182 "state": "completed",
00:15:00.182 "digest": "sha256",
00:15:00.182 "dhgroup": "ffdhe2048"
00:15:00.182 }
00:15:00.182 }
00:15:00.182 ]'
00:15:00.182 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:00.182 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:00.182 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:00.182 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:15:00.182 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:00.182 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:00.182 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:00.182 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:00.440 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: --dhchap-ctrl-secret DHHC-1:02:ZjYwNTE4YTE4MjM4NTFlOGVjZDk2ODdiODJiYjg0M2QxNTI2OTY1ZTUwYzM1ZWE2EV/+Mg==:
00:15:00.440 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: --dhchap-ctrl-secret DHHC-1:02:ZjYwNTE4YTE4MjM4NTFlOGVjZDk2ODdiODJiYjg0M2QxNTI2OTY1ZTUwYzM1ZWE2EV/+Mg==:
00:15:01.006 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:01.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:01.006 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:15:01.006 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:01.006 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:01.006 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:01.006 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:01.006 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:15:01.006 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:15:01.265 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2
00:15:01.265 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:01.265 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:01.265 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:15:01.265 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:15:01.265 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:01.265 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:01.265 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:01.265 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:01.265 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:01.265 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:01.265 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:01.265 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:01.525
00:15:01.525 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:01.525 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:01.525 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:01.525 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:01.525 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:01.525 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:01.525 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:01.525 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:01.525 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:01.525 {
00:15:01.525 "cntlid": 13,
00:15:01.525 "qid": 0,
00:15:01.525 "state": "enabled",
00:15:01.525 "thread": "nvmf_tgt_poll_group_000",
00:15:01.525 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:15:01.525 "listen_address": {
00:15:01.525 "trtype": "TCP",
00:15:01.525 "adrfam": "IPv4",
00:15:01.525 "traddr": "10.0.0.2",
00:15:01.525 "trsvcid": "4420"
00:15:01.525 },
00:15:01.525 "peer_address": {
00:15:01.525 "trtype": "TCP",
00:15:01.525 "adrfam": "IPv4",
00:15:01.525 "traddr": "10.0.0.1",
00:15:01.525 "trsvcid": "54408" },
00:15:01.525 "auth": {
00:15:01.525 "state": "completed",
00:15:01.525 "digest": "sha256",
00:15:01.525 "dhgroup": "ffdhe2048"
00:15:01.525 }
00:15:01.525 }
00:15:01.525 ]'
00:15:01.525 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:01.784 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:01.784 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:01.784 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:15:01.784 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:01.784 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:01.784 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:01.784 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:02.042 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:01:MDQyNzQwZTg4OTAyYzY3Zjc1MmE2ZmRlMDQ5ODIyMjEWgd7d:
00:15:02.042 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:01:MDQyNzQwZTg4OTAyYzY3Zjc1MmE2ZmRlMDQ5ODIyMjEWgd7d:
00:15:02.610 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:02.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:02.610 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:15:02.610 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:02.610 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:02.610 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:02.610 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:02.610 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:15:02.610 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:15:02.610 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3
00:15:02.610 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:02.610 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:02.610 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:15:02.610 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:15:02.610 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:02.610 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:15:02.610 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:02.610 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:02.610 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:02.610 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:15:02.610 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:02.610 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:02.869
00:15:02.869 18:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:02.869 18:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:02.869 18:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:03.128 18:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:03.128 18:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:03.128 18:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:03.128 18:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:03.128 18:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:03.128 18:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:03.128 {
00:15:03.128 "cntlid": 15,
00:15:03.128 "qid": 0,
00:15:03.128 "state": "enabled",
00:15:03.128 "thread": "nvmf_tgt_poll_group_000",
00:15:03.128 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:15:03.128 "listen_address": {
00:15:03.128 "trtype": "TCP",
00:15:03.128 "adrfam": "IPv4",
00:15:03.128 "traddr": "10.0.0.2",
00:15:03.128 "trsvcid": "4420"
00:15:03.128 },
00:15:03.128 "peer_address": {
00:15:03.128 "trtype": "TCP",
00:15:03.128 "adrfam": "IPv4",
00:15:03.128 "traddr": "10.0.0.1",
00:15:03.128 "trsvcid": "54444" },
00:15:03.128 "auth": {
00:15:03.128 "state": "completed",
00:15:03.128 "digest": "sha256",
00:15:03.128 "dhgroup": "ffdhe2048"
00:15:03.128 }
00:15:03.128 }
00:15:03.128 ]'
00:15:03.128 18:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:03.128 18:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:03.128 18:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:03.128 18:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:15:03.128 18:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:03.386 18:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:03.386 18:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:03.386 18:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:03.386 18:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=:
00:15:03.386 18:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=:
00:15:03.954 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:03.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:03.954 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:15:03.954 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:03.954 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:03.954 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:03.954 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:15:03.954 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:03.954 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:15:03.954 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:15:04.213 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0
00:15:04.213 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:04.213 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:04.213 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:15:04.213 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:04.213 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:04.213 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:04.213 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.213 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:04.213 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.213 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:04.213 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:04.213 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:04.472
00:15:04.472 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:04.472 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:04.472 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:04.731 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:04.731 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:04.731 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.731 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:04.731 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.731 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:04.731 {
00:15:04.731 "cntlid": 17,
00:15:04.731 "qid": 0,
00:15:04.731 "state": "enabled",
00:15:04.731 "thread": "nvmf_tgt_poll_group_000",
00:15:04.731 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:15:04.731 "listen_address": {
00:15:04.731 "trtype": "TCP",
00:15:04.731 "adrfam": "IPv4",
00:15:04.731 "traddr": "10.0.0.2",
00:15:04.731 "trsvcid": "4420"
00:15:04.731 },
00:15:04.731 "peer_address": {
00:15:04.731 "trtype": "TCP",
00:15:04.731 "adrfam": "IPv4",
00:15:04.731 "traddr": "10.0.0.1",
00:15:04.731 "trsvcid": "54474" },
00:15:04.731 "auth": {
00:15:04.731 "state": "completed",
00:15:04.731 "digest": "sha256",
00:15:04.731 "dhgroup": "ffdhe3072"
00:15:04.731 }
00:15:04.731 }
00:15:04.731 ]'
00:15:04.731 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:04.731 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:04.731 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:04.731 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:15:04.731 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:04.731 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:04.731 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:04.731 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:04.990 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=:
00:15:04.990 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=:
00:15:05.558 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:05.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:05.558 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:15:05.558 18:52:27
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.558 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.558 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.558 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:05.558 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:05.558 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:05.817 18:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:05.817 18:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:05.817 18:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:05.817 18:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:05.817 18:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:05.817 18:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.817 18:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.817 18:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.817 18:52:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.817 18:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.817 18:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.817 18:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.818 18:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:06.076 00:15:06.076 18:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:06.077 18:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:06.077 18:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.335 18:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.335 18:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.335 18:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.335 18:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:06.335 18:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.335 18:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:06.335 { 00:15:06.335 "cntlid": 19, 00:15:06.335 "qid": 0, 00:15:06.335 "state": "enabled", 00:15:06.335 "thread": "nvmf_tgt_poll_group_000", 00:15:06.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:06.335 "listen_address": { 00:15:06.335 "trtype": "TCP", 00:15:06.335 "adrfam": "IPv4", 00:15:06.335 "traddr": "10.0.0.2", 00:15:06.335 "trsvcid": "4420" 00:15:06.335 }, 00:15:06.335 "peer_address": { 00:15:06.335 "trtype": "TCP", 00:15:06.335 "adrfam": "IPv4", 00:15:06.335 "traddr": "10.0.0.1", 00:15:06.335 "trsvcid": "54500" 00:15:06.335 }, 00:15:06.335 "auth": { 00:15:06.335 "state": "completed", 00:15:06.335 "digest": "sha256", 00:15:06.335 "dhgroup": "ffdhe3072" 00:15:06.335 } 00:15:06.335 } 00:15:06.335 ]' 00:15:06.335 18:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:06.335 18:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:06.335 18:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:06.335 18:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:06.335 18:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:06.335 18:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.335 18:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.335 18:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.594 18:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: --dhchap-ctrl-secret DHHC-1:02:ZjYwNTE4YTE4MjM4NTFlOGVjZDk2ODdiODJiYjg0M2QxNTI2OTY1ZTUwYzM1ZWE2EV/+Mg==: 00:15:06.594 18:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: --dhchap-ctrl-secret DHHC-1:02:ZjYwNTE4YTE4MjM4NTFlOGVjZDk2ODdiODJiYjg0M2QxNTI2OTY1ZTUwYzM1ZWE2EV/+Mg==: 00:15:07.161 18:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.161 18:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:07.161 18:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.161 18:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.161 18:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.161 18:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:07.161 18:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:07.161 18:52:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:07.420 18:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:07.420 18:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:07.420 18:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:07.420 18:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:07.420 18:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:07.420 18:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.420 18:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.420 18:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.420 18:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.420 18:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.420 18:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.420 18:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.421 18:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.680 00:15:07.680 18:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:07.680 18:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.680 18:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:07.938 18:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.938 18:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.938 18:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.938 18:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.938 18:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.938 18:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:07.938 { 00:15:07.938 "cntlid": 21, 00:15:07.938 "qid": 0, 00:15:07.938 "state": "enabled", 00:15:07.938 "thread": "nvmf_tgt_poll_group_000", 00:15:07.938 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:07.938 "listen_address": { 00:15:07.938 "trtype": "TCP", 00:15:07.938 "adrfam": "IPv4", 00:15:07.938 "traddr": "10.0.0.2", 00:15:07.938 
"trsvcid": "4420" 00:15:07.938 }, 00:15:07.938 "peer_address": { 00:15:07.938 "trtype": "TCP", 00:15:07.938 "adrfam": "IPv4", 00:15:07.938 "traddr": "10.0.0.1", 00:15:07.938 "trsvcid": "48738" 00:15:07.938 }, 00:15:07.938 "auth": { 00:15:07.938 "state": "completed", 00:15:07.938 "digest": "sha256", 00:15:07.938 "dhgroup": "ffdhe3072" 00:15:07.938 } 00:15:07.938 } 00:15:07.938 ]' 00:15:07.938 18:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:07.938 18:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:07.939 18:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:07.939 18:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:07.939 18:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:07.939 18:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.939 18:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.939 18:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.198 18:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:01:MDQyNzQwZTg4OTAyYzY3Zjc1MmE2ZmRlMDQ5ODIyMjEWgd7d: 00:15:08.198 18:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:01:MDQyNzQwZTg4OTAyYzY3Zjc1MmE2ZmRlMDQ5ODIyMjEWgd7d: 00:15:08.767 18:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.767 18:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:08.767 18:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.767 18:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.767 18:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.767 18:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:08.767 18:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:08.767 18:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:09.026 18:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:09.026 18:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:09.026 18:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:09.026 18:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:09.026 18:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:09.026 18:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.026 18:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:09.026 18:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.026 18:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.026 18:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.026 18:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:09.026 18:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:09.026 18:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:09.285 00:15:09.285 18:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:09.285 18:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:09.285 18:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.545 18:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.545 18:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.545 18:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.545 18:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.545 18:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.545 18:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:09.545 { 00:15:09.545 "cntlid": 23, 00:15:09.545 "qid": 0, 00:15:09.545 "state": "enabled", 00:15:09.545 "thread": "nvmf_tgt_poll_group_000", 00:15:09.545 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:09.545 "listen_address": { 00:15:09.545 "trtype": "TCP", 00:15:09.545 "adrfam": "IPv4", 00:15:09.545 "traddr": "10.0.0.2", 00:15:09.545 "trsvcid": "4420" 00:15:09.545 }, 00:15:09.545 "peer_address": { 00:15:09.545 "trtype": "TCP", 00:15:09.545 "adrfam": "IPv4", 00:15:09.545 "traddr": "10.0.0.1", 00:15:09.545 "trsvcid": "48782" 00:15:09.545 }, 00:15:09.545 "auth": { 00:15:09.545 "state": "completed", 00:15:09.545 "digest": "sha256", 00:15:09.545 "dhgroup": "ffdhe3072" 00:15:09.545 } 00:15:09.545 } 00:15:09.545 ]' 00:15:09.545 18:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:09.545 18:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:09.545 18:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:09.545 18:52:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:09.545 18:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:09.545 18:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.545 18:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.545 18:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.805 18:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=: 00:15:09.805 18:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=: 00:15:10.373 18:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.373 18:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:10.373 18:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.373 18:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:10.373 18:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.373 18:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:10.373 18:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.373 18:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:10.373 18:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:10.632 18:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:10.632 18:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:10.632 18:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:10.632 18:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:10.632 18:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:10.632 18:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.632 18:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.632 18:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.632 18:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:10.632 18:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.632 18:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.632 18:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.632 18:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.891 00:15:10.891 18:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:10.891 18:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:10.891 18:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.149 18:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.149 18:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.150 18:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.150 18:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.150 18:52:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.150 18:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:11.150 { 00:15:11.150 "cntlid": 25, 00:15:11.150 "qid": 0, 00:15:11.150 "state": "enabled", 00:15:11.150 "thread": "nvmf_tgt_poll_group_000", 00:15:11.150 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:11.150 "listen_address": { 00:15:11.150 "trtype": "TCP", 00:15:11.150 "adrfam": "IPv4", 00:15:11.150 "traddr": "10.0.0.2", 00:15:11.150 "trsvcid": "4420" 00:15:11.150 }, 00:15:11.150 "peer_address": { 00:15:11.150 "trtype": "TCP", 00:15:11.150 "adrfam": "IPv4", 00:15:11.150 "traddr": "10.0.0.1", 00:15:11.150 "trsvcid": "48818" 00:15:11.150 }, 00:15:11.150 "auth": { 00:15:11.150 "state": "completed", 00:15:11.150 "digest": "sha256", 00:15:11.150 "dhgroup": "ffdhe4096" 00:15:11.150 } 00:15:11.150 } 00:15:11.150 ]' 00:15:11.150 18:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:11.150 18:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:11.150 18:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:11.150 18:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:11.150 18:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:11.150 18:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.150 18:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.150 18:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.409 18:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=: 00:15:11.409 18:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=: 00:15:11.977 18:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.977 18:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:11.977 18:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.977 18:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.977 18:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.977 18:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:11.977 18:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:11.977 18:52:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:12.236 18:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:12.236 18:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:12.236 18:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:12.236 18:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:12.236 18:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:12.236 18:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.236 18:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.236 18:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.236 18:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.236 18:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.236 18:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.236 18:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.236 18:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.495 00:15:12.495 18:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:12.495 18:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:12.495 18:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.753 18:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.753 18:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.753 18:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.753 18:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.753 18:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.753 18:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:12.753 { 00:15:12.753 "cntlid": 27, 00:15:12.753 "qid": 0, 00:15:12.753 "state": "enabled", 00:15:12.753 "thread": "nvmf_tgt_poll_group_000", 00:15:12.753 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:12.753 "listen_address": { 00:15:12.753 "trtype": "TCP", 00:15:12.753 "adrfam": "IPv4", 00:15:12.753 "traddr": "10.0.0.2", 00:15:12.753 
"trsvcid": "4420" 00:15:12.753 }, 00:15:12.753 "peer_address": { 00:15:12.753 "trtype": "TCP", 00:15:12.753 "adrfam": "IPv4", 00:15:12.753 "traddr": "10.0.0.1", 00:15:12.753 "trsvcid": "48846" 00:15:12.753 }, 00:15:12.753 "auth": { 00:15:12.753 "state": "completed", 00:15:12.753 "digest": "sha256", 00:15:12.753 "dhgroup": "ffdhe4096" 00:15:12.753 } 00:15:12.753 } 00:15:12.753 ]' 00:15:12.753 18:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:12.753 18:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:12.753 18:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:12.753 18:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:12.753 18:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:12.753 18:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.753 18:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.753 18:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.012 18:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: --dhchap-ctrl-secret DHHC-1:02:ZjYwNTE4YTE4MjM4NTFlOGVjZDk2ODdiODJiYjg0M2QxNTI2OTY1ZTUwYzM1ZWE2EV/+Mg==: 00:15:13.012 18:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: --dhchap-ctrl-secret DHHC-1:02:ZjYwNTE4YTE4MjM4NTFlOGVjZDk2ODdiODJiYjg0M2QxNTI2OTY1ZTUwYzM1ZWE2EV/+Mg==: 00:15:13.579 18:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.579 18:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:13.579 18:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.579 18:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.579 18:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.579 18:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:13.579 18:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:13.579 18:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:13.838 18:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:13.838 18:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:13.838 18:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:13.838 18:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:13.838 18:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:13.838 18:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.838 18:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.838 18:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.838 18:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.838 18:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.838 18:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.838 18:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.838 18:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.098 00:15:14.098 18:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:14.098 18:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.098 18:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:14.357 18:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.357 18:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.357 18:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.357 18:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.357 18:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.357 18:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:14.357 { 00:15:14.357 "cntlid": 29, 00:15:14.357 "qid": 0, 00:15:14.357 "state": "enabled", 00:15:14.357 "thread": "nvmf_tgt_poll_group_000", 00:15:14.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:14.357 "listen_address": { 00:15:14.357 "trtype": "TCP", 00:15:14.357 "adrfam": "IPv4", 00:15:14.357 "traddr": "10.0.0.2", 00:15:14.357 "trsvcid": "4420" 00:15:14.357 }, 00:15:14.357 "peer_address": { 00:15:14.357 "trtype": "TCP", 00:15:14.357 "adrfam": "IPv4", 00:15:14.357 "traddr": "10.0.0.1", 00:15:14.357 "trsvcid": "48876" 00:15:14.357 }, 00:15:14.357 "auth": { 00:15:14.357 "state": "completed", 00:15:14.357 "digest": "sha256", 00:15:14.357 "dhgroup": "ffdhe4096" 00:15:14.357 } 00:15:14.357 } 00:15:14.357 ]' 00:15:14.357 18:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.357 18:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:14.357 18:52:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.357 18:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:14.357 18:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.357 18:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.357 18:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.357 18:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.616 18:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:01:MDQyNzQwZTg4OTAyYzY3Zjc1MmE2ZmRlMDQ5ODIyMjEWgd7d: 00:15:14.616 18:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:01:MDQyNzQwZTg4OTAyYzY3Zjc1MmE2ZmRlMDQ5ODIyMjEWgd7d: 00:15:15.182 18:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.182 18:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:15.182 18:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.182 18:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.182 18:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.182 18:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.182 18:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:15.182 18:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:15.440 18:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:15.440 18:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.440 18:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:15.441 18:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:15.441 18:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:15.441 18:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.441 18:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:15.441 18:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.441 18:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.441 18:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.441 18:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:15.441 18:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:15.441 18:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:15.699 00:15:15.699 18:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:15.699 18:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.699 18:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:15.959 18:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.959 18:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.959 18:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.959 18:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:15.959 18:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.959 18:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:15.959 { 00:15:15.959 "cntlid": 31, 00:15:15.959 "qid": 0, 00:15:15.959 "state": "enabled", 00:15:15.959 "thread": "nvmf_tgt_poll_group_000", 00:15:15.959 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:15.959 "listen_address": { 00:15:15.959 "trtype": "TCP", 00:15:15.959 "adrfam": "IPv4", 00:15:15.959 "traddr": "10.0.0.2", 00:15:15.959 "trsvcid": "4420" 00:15:15.959 }, 00:15:15.959 "peer_address": { 00:15:15.959 "trtype": "TCP", 00:15:15.959 "adrfam": "IPv4", 00:15:15.959 "traddr": "10.0.0.1", 00:15:15.959 "trsvcid": "48898" 00:15:15.959 }, 00:15:15.959 "auth": { 00:15:15.959 "state": "completed", 00:15:15.959 "digest": "sha256", 00:15:15.959 "dhgroup": "ffdhe4096" 00:15:15.959 } 00:15:15.959 } 00:15:15.959 ]' 00:15:15.959 18:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:15.959 18:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:15.959 18:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:15.959 18:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:15.959 18:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:15.959 18:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.959 18:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.959 18:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.218 18:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=: 00:15:16.218 18:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=: 00:15:16.786 18:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.786 18:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:16.786 18:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.786 18:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.786 18:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.786 18:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:16.786 18:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:16.786 18:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:16.786 18:52:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:17.048 18:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:17.048 18:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.048 18:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:17.048 18:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:17.048 18:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:17.048 18:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.048 18:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.048 18:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.048 18:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.048 18:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.048 18:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.048 18:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.048 18:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.309 00:15:17.309 18:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:17.309 18:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:17.310 18:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.568 18:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.568 18:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.568 18:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.568 18:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.568 18:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.568 18:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.568 { 00:15:17.568 "cntlid": 33, 00:15:17.568 "qid": 0, 00:15:17.568 "state": "enabled", 00:15:17.568 "thread": "nvmf_tgt_poll_group_000", 00:15:17.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:17.568 "listen_address": { 00:15:17.568 "trtype": "TCP", 00:15:17.568 "adrfam": "IPv4", 00:15:17.568 "traddr": "10.0.0.2", 00:15:17.568 
"trsvcid": "4420" 00:15:17.568 }, 00:15:17.568 "peer_address": { 00:15:17.568 "trtype": "TCP", 00:15:17.568 "adrfam": "IPv4", 00:15:17.568 "traddr": "10.0.0.1", 00:15:17.568 "trsvcid": "48924" 00:15:17.568 }, 00:15:17.568 "auth": { 00:15:17.568 "state": "completed", 00:15:17.568 "digest": "sha256", 00:15:17.568 "dhgroup": "ffdhe6144" 00:15:17.568 } 00:15:17.568 } 00:15:17.568 ]' 00:15:17.568 18:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.568 18:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:17.568 18:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:17.568 18:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:17.568 18:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:17.568 18:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.568 18:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.568 18:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.828 18:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=: 00:15:17.828 18:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=: 00:15:18.395 18:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.395 18:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:18.395 18:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.395 18:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.395 18:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.395 18:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:18.395 18:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:18.395 18:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:18.655 18:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:18.655 18:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:18.655 18:52:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:18.655 18:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:18.655 18:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:18.655 18:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.655 18:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.655 18:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.655 18:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.655 18:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.655 18:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.655 18:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.655 18:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.914 00:15:18.914 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:18.914 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:18.914 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.173 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.173 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.173 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.173 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.173 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.173 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:19.173 { 00:15:19.173 "cntlid": 35, 00:15:19.173 "qid": 0, 00:15:19.173 "state": "enabled", 00:15:19.173 "thread": "nvmf_tgt_poll_group_000", 00:15:19.173 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:19.173 "listen_address": { 00:15:19.173 "trtype": "TCP", 00:15:19.173 "adrfam": "IPv4", 00:15:19.173 "traddr": "10.0.0.2", 00:15:19.173 "trsvcid": "4420" 00:15:19.173 }, 00:15:19.173 "peer_address": { 00:15:19.173 "trtype": "TCP", 00:15:19.173 "adrfam": "IPv4", 00:15:19.173 "traddr": "10.0.0.1", 00:15:19.173 "trsvcid": "37454" 00:15:19.173 }, 00:15:19.173 "auth": { 00:15:19.173 "state": "completed", 00:15:19.173 "digest": "sha256", 00:15:19.173 "dhgroup": "ffdhe6144" 00:15:19.173 } 00:15:19.173 } 00:15:19.173 ]' 00:15:19.173 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:19.173 18:52:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:19.173 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:19.432 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:19.432 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:19.432 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.432 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.432 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.691 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: --dhchap-ctrl-secret DHHC-1:02:ZjYwNTE4YTE4MjM4NTFlOGVjZDk2ODdiODJiYjg0M2QxNTI2OTY1ZTUwYzM1ZWE2EV/+Mg==: 00:15:19.691 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: --dhchap-ctrl-secret DHHC-1:02:ZjYwNTE4YTE4MjM4NTFlOGVjZDk2ODdiODJiYjg0M2QxNTI2OTY1ZTUwYzM1ZWE2EV/+Mg==: 00:15:20.258 18:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.258 18:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:20.258 18:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.258 18:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.258 18:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.258 18:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:20.258 18:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:20.258 18:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:20.258 18:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:20.258 18:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:20.258 18:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:20.258 18:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:20.258 18:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:20.258 18:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.258 18:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:15:20.258 18:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.258 18:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.258 18:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.258 18:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.258 18:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.258 18:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.827 00:15:20.827 18:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:20.827 18:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:20.827 18:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.827 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.827 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.827 18:52:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.827 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.827 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.827 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.827 { 00:15:20.827 "cntlid": 37, 00:15:20.827 "qid": 0, 00:15:20.827 "state": "enabled", 00:15:20.827 "thread": "nvmf_tgt_poll_group_000", 00:15:20.827 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:20.827 "listen_address": { 00:15:20.827 "trtype": "TCP", 00:15:20.827 "adrfam": "IPv4", 00:15:20.827 "traddr": "10.0.0.2", 00:15:20.827 "trsvcid": "4420" 00:15:20.827 }, 00:15:20.827 "peer_address": { 00:15:20.827 "trtype": "TCP", 00:15:20.827 "adrfam": "IPv4", 00:15:20.827 "traddr": "10.0.0.1", 00:15:20.827 "trsvcid": "37480" 00:15:20.827 }, 00:15:20.827 "auth": { 00:15:20.827 "state": "completed", 00:15:20.827 "digest": "sha256", 00:15:20.827 "dhgroup": "ffdhe6144" 00:15:20.827 } 00:15:20.827 } 00:15:20.827 ]' 00:15:20.827 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:21.100 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:21.100 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:21.100 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:21.100 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:21.100 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.100 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.100 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.383 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:01:MDQyNzQwZTg4OTAyYzY3Zjc1MmE2ZmRlMDQ5ODIyMjEWgd7d: 00:15:21.383 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:01:MDQyNzQwZTg4OTAyYzY3Zjc1MmE2ZmRlMDQ5ODIyMjEWgd7d: 00:15:21.963 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.963 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:21.963 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.963 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.963 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.963 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:21.963 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:21.963 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:21.963 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:15:21.963 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:21.963 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:21.963 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:21.963 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:21.963 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.963 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:21.963 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.963 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.963 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.963 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:21.963 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:21.963 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:22.530 00:15:22.530 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:22.530 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:22.530 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.530 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.530 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.530 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.530 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.531 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.531 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.531 { 00:15:22.531 "cntlid": 39, 00:15:22.531 "qid": 0, 00:15:22.531 "state": "enabled", 00:15:22.531 "thread": "nvmf_tgt_poll_group_000", 00:15:22.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:22.531 "listen_address": { 00:15:22.531 "trtype": "TCP", 00:15:22.531 "adrfam": 
"IPv4", 00:15:22.531 "traddr": "10.0.0.2", 00:15:22.531 "trsvcid": "4420" 00:15:22.531 }, 00:15:22.531 "peer_address": { 00:15:22.531 "trtype": "TCP", 00:15:22.531 "adrfam": "IPv4", 00:15:22.531 "traddr": "10.0.0.1", 00:15:22.531 "trsvcid": "37492" 00:15:22.531 }, 00:15:22.531 "auth": { 00:15:22.531 "state": "completed", 00:15:22.531 "digest": "sha256", 00:15:22.531 "dhgroup": "ffdhe6144" 00:15:22.531 } 00:15:22.531 } 00:15:22.531 ]' 00:15:22.531 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.789 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:22.789 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.789 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:22.789 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.790 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.790 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.790 18:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.048 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=: 00:15:23.048 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=: 00:15:23.615 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.615 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:23.615 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.615 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.615 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.615 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:23.615 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.615 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:23.615 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:23.615 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:23.615 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.615 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:23.615 
18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:23.615 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:23.615 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.615 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.615 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.616 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.616 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.616 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.616 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.616 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.183 00:15:24.183 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:24.183 18:52:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:24.183 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.443 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.443 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.443 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.443 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.443 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.443 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.443 { 00:15:24.443 "cntlid": 41, 00:15:24.443 "qid": 0, 00:15:24.443 "state": "enabled", 00:15:24.443 "thread": "nvmf_tgt_poll_group_000", 00:15:24.443 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:24.443 "listen_address": { 00:15:24.443 "trtype": "TCP", 00:15:24.443 "adrfam": "IPv4", 00:15:24.443 "traddr": "10.0.0.2", 00:15:24.443 "trsvcid": "4420" 00:15:24.443 }, 00:15:24.443 "peer_address": { 00:15:24.443 "trtype": "TCP", 00:15:24.443 "adrfam": "IPv4", 00:15:24.443 "traddr": "10.0.0.1", 00:15:24.443 "trsvcid": "37518" 00:15:24.443 }, 00:15:24.443 "auth": { 00:15:24.443 "state": "completed", 00:15:24.443 "digest": "sha256", 00:15:24.443 "dhgroup": "ffdhe8192" 00:15:24.443 } 00:15:24.443 } 00:15:24.443 ]' 00:15:24.443 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.443 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:15:24.443 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.443 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:24.443 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.443 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.443 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.443 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.702 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=: 00:15:24.702 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=: 00:15:25.269 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.269 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:25.269 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.269 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.269 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.269 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:25.269 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:25.269 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:25.528 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:25.528 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:25.528 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:25.528 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:25.528 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:25.528 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.528 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:15:25.528 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.528 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.528 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.528 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.528 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.528 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.096 00:15:26.096 18:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.096 18:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.096 18:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.096 18:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.096 18:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.096 18:52:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.096 18:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.096 18:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.096 18:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.096 { 00:15:26.096 "cntlid": 43, 00:15:26.096 "qid": 0, 00:15:26.096 "state": "enabled", 00:15:26.096 "thread": "nvmf_tgt_poll_group_000", 00:15:26.096 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:26.096 "listen_address": { 00:15:26.096 "trtype": "TCP", 00:15:26.096 "adrfam": "IPv4", 00:15:26.096 "traddr": "10.0.0.2", 00:15:26.096 "trsvcid": "4420" 00:15:26.096 }, 00:15:26.096 "peer_address": { 00:15:26.096 "trtype": "TCP", 00:15:26.096 "adrfam": "IPv4", 00:15:26.096 "traddr": "10.0.0.1", 00:15:26.096 "trsvcid": "37550" 00:15:26.096 }, 00:15:26.096 "auth": { 00:15:26.096 "state": "completed", 00:15:26.096 "digest": "sha256", 00:15:26.096 "dhgroup": "ffdhe8192" 00:15:26.096 } 00:15:26.096 } 00:15:26.096 ]' 00:15:26.096 18:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.096 18:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:26.096 18:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.355 18:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:26.355 18:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.355 18:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.355 18:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.355 18:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.615 18:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: --dhchap-ctrl-secret DHHC-1:02:ZjYwNTE4YTE4MjM4NTFlOGVjZDk2ODdiODJiYjg0M2QxNTI2OTY1ZTUwYzM1ZWE2EV/+Mg==: 00:15:26.615 18:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: --dhchap-ctrl-secret DHHC-1:02:ZjYwNTE4YTE4MjM4NTFlOGVjZDk2ODdiODJiYjg0M2QxNTI2OTY1ZTUwYzM1ZWE2EV/+Mg==: 00:15:27.183 18:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.183 18:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:27.183 18:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.183 18:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.183 18:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.183 18:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:27.183 18:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:27.183 18:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:27.183 18:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:27.183 18:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:27.183 18:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:27.183 18:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:27.183 18:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:27.183 18:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.183 18:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.183 18:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.183 18:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.183 18:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.183 18:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.183 18:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.183 18:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.751 00:15:27.751 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:27.751 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:27.751 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.010 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.010 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.010 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.010 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.010 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.010 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:28.010 { 00:15:28.010 "cntlid": 45, 00:15:28.010 "qid": 0, 00:15:28.010 "state": "enabled", 00:15:28.010 "thread": "nvmf_tgt_poll_group_000", 00:15:28.010 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:28.010 
"listen_address": { 00:15:28.010 "trtype": "TCP", 00:15:28.010 "adrfam": "IPv4", 00:15:28.010 "traddr": "10.0.0.2", 00:15:28.010 "trsvcid": "4420" 00:15:28.010 }, 00:15:28.010 "peer_address": { 00:15:28.010 "trtype": "TCP", 00:15:28.010 "adrfam": "IPv4", 00:15:28.010 "traddr": "10.0.0.1", 00:15:28.010 "trsvcid": "37562" 00:15:28.010 }, 00:15:28.010 "auth": { 00:15:28.010 "state": "completed", 00:15:28.010 "digest": "sha256", 00:15:28.010 "dhgroup": "ffdhe8192" 00:15:28.010 } 00:15:28.010 } 00:15:28.010 ]' 00:15:28.010 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:28.010 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:28.010 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:28.010 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:28.010 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:28.269 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.269 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.269 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.269 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:01:MDQyNzQwZTg4OTAyYzY3Zjc1MmE2ZmRlMDQ5ODIyMjEWgd7d: 00:15:28.269 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:01:MDQyNzQwZTg4OTAyYzY3Zjc1MmE2ZmRlMDQ5ODIyMjEWgd7d: 00:15:28.837 18:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.837 18:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:28.837 18:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.837 18:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.837 18:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.837 18:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.837 18:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:28.837 18:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:29.097 18:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:29.097 18:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:29.097 18:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:15:29.097 18:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:29.097 18:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:29.097 18:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.097 18:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:29.097 18:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.097 18:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.097 18:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.097 18:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:29.097 18:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:29.097 18:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:29.665 00:15:29.665 18:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:29.665 18:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:15:29.665 18:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.925 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.925 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.925 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.925 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.925 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.925 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.925 { 00:15:29.925 "cntlid": 47, 00:15:29.925 "qid": 0, 00:15:29.925 "state": "enabled", 00:15:29.925 "thread": "nvmf_tgt_poll_group_000", 00:15:29.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:29.925 "listen_address": { 00:15:29.925 "trtype": "TCP", 00:15:29.925 "adrfam": "IPv4", 00:15:29.925 "traddr": "10.0.0.2", 00:15:29.925 "trsvcid": "4420" 00:15:29.925 }, 00:15:29.925 "peer_address": { 00:15:29.925 "trtype": "TCP", 00:15:29.925 "adrfam": "IPv4", 00:15:29.925 "traddr": "10.0.0.1", 00:15:29.925 "trsvcid": "56680" 00:15:29.925 }, 00:15:29.925 "auth": { 00:15:29.925 "state": "completed", 00:15:29.925 "digest": "sha256", 00:15:29.925 "dhgroup": "ffdhe8192" 00:15:29.925 } 00:15:29.925 } 00:15:29.925 ]' 00:15:29.925 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.925 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:29.925 18:52:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:29.925 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:29.925 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:29.925 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.925 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.925 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.184 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=: 00:15:30.184 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=: 00:15:30.752 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.752 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:30.752 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:30.752 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.752 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.752 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:30.752 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:30.752 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.752 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:30.752 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:31.010 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:31.010 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:31.011 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:31.011 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:31.011 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:31.011 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.011 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.011 
18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.011 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.011 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.011 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.011 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.011 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.269 00:15:31.269 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:31.269 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:31.269 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.269 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.270 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.270 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.270 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.270 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.270 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:31.270 { 00:15:31.270 "cntlid": 49, 00:15:31.270 "qid": 0, 00:15:31.270 "state": "enabled", 00:15:31.270 "thread": "nvmf_tgt_poll_group_000", 00:15:31.270 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:31.270 "listen_address": { 00:15:31.270 "trtype": "TCP", 00:15:31.270 "adrfam": "IPv4", 00:15:31.270 "traddr": "10.0.0.2", 00:15:31.270 "trsvcid": "4420" 00:15:31.270 }, 00:15:31.270 "peer_address": { 00:15:31.270 "trtype": "TCP", 00:15:31.270 "adrfam": "IPv4", 00:15:31.270 "traddr": "10.0.0.1", 00:15:31.270 "trsvcid": "56698" 00:15:31.270 }, 00:15:31.270 "auth": { 00:15:31.270 "state": "completed", 00:15:31.270 "digest": "sha384", 00:15:31.270 "dhgroup": "null" 00:15:31.270 } 00:15:31.270 } 00:15:31.270 ]' 00:15:31.270 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:31.528 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:31.528 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:31.528 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:31.528 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:31.528 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.528 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:15:31.528 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.787 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=: 00:15:31.787 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=: 00:15:32.355 18:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.355 18:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:32.355 18:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.355 18:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.355 18:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.355 18:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:32.355 18:52:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:32.355 18:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:32.355 18:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:32.355 18:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:32.355 18:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:32.355 18:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:32.355 18:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:32.355 18:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.355 18:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.355 18:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.355 18:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.614 18:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.614 18:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.614 18:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.614 18:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.614 00:15:32.872 18:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:32.873 18:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.873 18:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.873 18:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.873 18:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.873 18:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.873 18:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.873 18:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.873 18:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.873 { 00:15:32.873 "cntlid": 51, 00:15:32.873 "qid": 0, 00:15:32.873 "state": "enabled", 00:15:32.873 "thread": "nvmf_tgt_poll_group_000", 00:15:32.873 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:32.873 "listen_address": { 00:15:32.873 "trtype": "TCP", 00:15:32.873 "adrfam": "IPv4", 00:15:32.873 "traddr": "10.0.0.2", 00:15:32.873 "trsvcid": "4420" 00:15:32.873 }, 00:15:32.873 "peer_address": { 00:15:32.873 "trtype": "TCP", 00:15:32.873 "adrfam": "IPv4", 00:15:32.873 "traddr": "10.0.0.1", 00:15:32.873 "trsvcid": "56726" 00:15:32.873 }, 00:15:32.873 "auth": { 00:15:32.873 "state": "completed", 00:15:32.873 "digest": "sha384", 00:15:32.873 "dhgroup": "null" 00:15:32.873 } 00:15:32.873 } 00:15:32.873 ]' 00:15:32.873 18:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.873 18:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:32.873 18:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:33.131 18:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:33.131 18:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:33.131 18:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.131 18:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.131 18:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.390 18:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: --dhchap-ctrl-secret DHHC-1:02:ZjYwNTE4YTE4MjM4NTFlOGVjZDk2ODdiODJiYjg0M2QxNTI2OTY1ZTUwYzM1ZWE2EV/+Mg==: 00:15:33.390 18:52:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: --dhchap-ctrl-secret DHHC-1:02:ZjYwNTE4YTE4MjM4NTFlOGVjZDk2ODdiODJiYjg0M2QxNTI2OTY1ZTUwYzM1ZWE2EV/+Mg==: 00:15:33.958 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.958 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:33.958 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.958 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.958 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.958 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:33.958 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:33.958 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:33.958 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:15:33.958 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:15:33.958 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:33.958 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:33.958 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:33.958 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.959 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:33.959 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.959 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.959 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.959 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:33.959 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:33.959 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.218 00:15:34.218 18:52:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:34.218 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:34.218 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.477 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.477 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.477 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.477 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.477 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.477 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:34.477 { 00:15:34.477 "cntlid": 53, 00:15:34.477 "qid": 0, 00:15:34.477 "state": "enabled", 00:15:34.477 "thread": "nvmf_tgt_poll_group_000", 00:15:34.477 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:34.477 "listen_address": { 00:15:34.477 "trtype": "TCP", 00:15:34.477 "adrfam": "IPv4", 00:15:34.477 "traddr": "10.0.0.2", 00:15:34.477 "trsvcid": "4420" 00:15:34.477 }, 00:15:34.477 "peer_address": { 00:15:34.477 "trtype": "TCP", 00:15:34.477 "adrfam": "IPv4", 00:15:34.477 "traddr": "10.0.0.1", 00:15:34.477 "trsvcid": "56752" 00:15:34.477 }, 00:15:34.477 "auth": { 00:15:34.477 "state": "completed", 00:15:34.477 "digest": "sha384", 00:15:34.477 "dhgroup": "null" 00:15:34.477 } 00:15:34.477 } 00:15:34.477 ]' 00:15:34.477 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:15:34.477 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:34.477 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.477 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:34.477 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.736 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.736 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.736 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.736 18:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:01:MDQyNzQwZTg4OTAyYzY3Zjc1MmE2ZmRlMDQ5ODIyMjEWgd7d: 00:15:34.737 18:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:01:MDQyNzQwZTg4OTAyYzY3Zjc1MmE2ZmRlMDQ5ODIyMjEWgd7d: 00:15:35.304 18:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.304 18:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:35.304 18:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.304 18:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.304 18:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.304 18:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.304 18:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:35.304 18:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:35.563 18:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:35.563 18:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.563 18:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:35.563 18:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:35.563 18:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:35.563 18:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.563 18:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:35.563 
18:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.563 18:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.563 18:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.563 18:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:35.563 18:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:35.563 18:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:35.822 00:15:35.822 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.822 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:35.822 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.082 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.082 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.082 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.082 18:52:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.082 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.082 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:36.082 { 00:15:36.082 "cntlid": 55, 00:15:36.082 "qid": 0, 00:15:36.082 "state": "enabled", 00:15:36.082 "thread": "nvmf_tgt_poll_group_000", 00:15:36.082 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:36.082 "listen_address": { 00:15:36.082 "trtype": "TCP", 00:15:36.082 "adrfam": "IPv4", 00:15:36.082 "traddr": "10.0.0.2", 00:15:36.082 "trsvcid": "4420" 00:15:36.082 }, 00:15:36.082 "peer_address": { 00:15:36.082 "trtype": "TCP", 00:15:36.082 "adrfam": "IPv4", 00:15:36.082 "traddr": "10.0.0.1", 00:15:36.082 "trsvcid": "56784" 00:15:36.082 }, 00:15:36.082 "auth": { 00:15:36.082 "state": "completed", 00:15:36.082 "digest": "sha384", 00:15:36.082 "dhgroup": "null" 00:15:36.082 } 00:15:36.082 } 00:15:36.082 ]' 00:15:36.082 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:36.082 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:36.082 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:36.082 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:36.082 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:36.341 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.341 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.342 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.342 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=: 00:15:36.342 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=: 00:15:36.909 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.909 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:36.909 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.909 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.909 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.909 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:36.909 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:36.909 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:36.909 18:52:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:37.168 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:37.168 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:37.168 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:37.168 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:37.168 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:37.168 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.168 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.168 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.168 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.168 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.168 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.169 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.169 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.428 00:15:37.428 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.428 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:37.428 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.687 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.687 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.687 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.687 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.687 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.687 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.687 { 00:15:37.687 "cntlid": 57, 00:15:37.687 "qid": 0, 00:15:37.687 "state": "enabled", 00:15:37.687 "thread": "nvmf_tgt_poll_group_000", 00:15:37.687 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:37.687 "listen_address": { 00:15:37.687 "trtype": "TCP", 00:15:37.687 "adrfam": "IPv4", 00:15:37.687 "traddr": "10.0.0.2", 00:15:37.687 
"trsvcid": "4420" 00:15:37.687 }, 00:15:37.687 "peer_address": { 00:15:37.687 "trtype": "TCP", 00:15:37.687 "adrfam": "IPv4", 00:15:37.687 "traddr": "10.0.0.1", 00:15:37.687 "trsvcid": "56800" 00:15:37.687 }, 00:15:37.687 "auth": { 00:15:37.687 "state": "completed", 00:15:37.687 "digest": "sha384", 00:15:37.687 "dhgroup": "ffdhe2048" 00:15:37.687 } 00:15:37.687 } 00:15:37.687 ]' 00:15:37.687 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.687 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:37.687 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.688 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:37.688 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.688 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.688 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.688 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.947 18:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=: 00:15:37.947 18:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=: 00:15:38.514 18:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.514 18:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:38.514 18:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.514 18:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.514 18:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.514 18:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.514 18:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:38.514 18:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:38.773 18:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:38.773 18:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:38.773 18:53:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:38.773 18:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:38.773 18:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:38.773 18:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.773 18:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.773 18:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.773 18:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.773 18:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.773 18:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.773 18:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.773 18:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.033 00:15:39.033 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.033 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.033 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.292 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.292 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.292 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.292 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.292 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.292 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.292 { 00:15:39.292 "cntlid": 59, 00:15:39.292 "qid": 0, 00:15:39.292 "state": "enabled", 00:15:39.292 "thread": "nvmf_tgt_poll_group_000", 00:15:39.292 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:39.292 "listen_address": { 00:15:39.292 "trtype": "TCP", 00:15:39.292 "adrfam": "IPv4", 00:15:39.292 "traddr": "10.0.0.2", 00:15:39.292 "trsvcid": "4420" 00:15:39.292 }, 00:15:39.292 "peer_address": { 00:15:39.292 "trtype": "TCP", 00:15:39.292 "adrfam": "IPv4", 00:15:39.292 "traddr": "10.0.0.1", 00:15:39.292 "trsvcid": "37786" 00:15:39.292 }, 00:15:39.292 "auth": { 00:15:39.292 "state": "completed", 00:15:39.292 "digest": "sha384", 00:15:39.292 "dhgroup": "ffdhe2048" 00:15:39.292 } 00:15:39.292 } 00:15:39.292 ]' 00:15:39.292 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.292 18:53:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:39.292 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.292 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:39.292 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.292 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.292 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.292 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.551 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: --dhchap-ctrl-secret DHHC-1:02:ZjYwNTE4YTE4MjM4NTFlOGVjZDk2ODdiODJiYjg0M2QxNTI2OTY1ZTUwYzM1ZWE2EV/+Mg==: 00:15:39.551 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: --dhchap-ctrl-secret DHHC-1:02:ZjYwNTE4YTE4MjM4NTFlOGVjZDk2ODdiODJiYjg0M2QxNTI2OTY1ZTUwYzM1ZWE2EV/+Mg==: 00:15:40.120 18:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.120 18:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:40.120 18:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.120 18:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.120 18:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.120 18:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.120 18:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:40.120 18:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:40.379 18:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:40.379 18:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.379 18:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:40.379 18:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:40.379 18:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:40.379 18:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.379 18:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:15:40.379 18:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.379 18:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.379 18:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.379 18:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.379 18:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.379 18:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.638 00:15:40.638 18:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.638 18:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.638 18:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.898 18:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.898 18:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.898 18:53:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.898 18:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.898 18:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.898 18:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.898 { 00:15:40.898 "cntlid": 61, 00:15:40.898 "qid": 0, 00:15:40.898 "state": "enabled", 00:15:40.898 "thread": "nvmf_tgt_poll_group_000", 00:15:40.898 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:40.898 "listen_address": { 00:15:40.898 "trtype": "TCP", 00:15:40.898 "adrfam": "IPv4", 00:15:40.898 "traddr": "10.0.0.2", 00:15:40.898 "trsvcid": "4420" 00:15:40.898 }, 00:15:40.898 "peer_address": { 00:15:40.898 "trtype": "TCP", 00:15:40.898 "adrfam": "IPv4", 00:15:40.898 "traddr": "10.0.0.1", 00:15:40.898 "trsvcid": "37818" 00:15:40.898 }, 00:15:40.898 "auth": { 00:15:40.898 "state": "completed", 00:15:40.898 "digest": "sha384", 00:15:40.898 "dhgroup": "ffdhe2048" 00:15:40.898 } 00:15:40.898 } 00:15:40.898 ]' 00:15:40.898 18:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.898 18:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:40.898 18:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:40.898 18:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:40.898 18:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:40.898 18:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.898 18:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.898 18:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.156 18:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:01:MDQyNzQwZTg4OTAyYzY3Zjc1MmE2ZmRlMDQ5ODIyMjEWgd7d: 00:15:41.157 18:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:01:MDQyNzQwZTg4OTAyYzY3Zjc1MmE2ZmRlMDQ5ODIyMjEWgd7d: 00:15:41.724 18:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.724 18:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:41.724 18:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.724 18:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.724 18:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.724 18:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.724 18:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:41.724 18:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:41.983 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:41.983 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:41.983 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:41.983 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:41.983 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:41.983 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.983 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:41.983 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.983 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.983 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.983 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:41.983 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:41.983 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:42.242 00:15:42.242 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.242 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.242 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.501 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.501 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.501 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.501 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.501 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.501 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.501 { 00:15:42.501 "cntlid": 63, 00:15:42.501 "qid": 0, 00:15:42.501 "state": "enabled", 00:15:42.501 "thread": "nvmf_tgt_poll_group_000", 00:15:42.501 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:42.501 "listen_address": { 00:15:42.501 "trtype": "TCP", 00:15:42.501 "adrfam": 
"IPv4", 00:15:42.501 "traddr": "10.0.0.2", 00:15:42.501 "trsvcid": "4420" 00:15:42.501 }, 00:15:42.501 "peer_address": { 00:15:42.501 "trtype": "TCP", 00:15:42.501 "adrfam": "IPv4", 00:15:42.501 "traddr": "10.0.0.1", 00:15:42.501 "trsvcid": "37842" 00:15:42.501 }, 00:15:42.501 "auth": { 00:15:42.501 "state": "completed", 00:15:42.501 "digest": "sha384", 00:15:42.501 "dhgroup": "ffdhe2048" 00:15:42.501 } 00:15:42.501 } 00:15:42.501 ]' 00:15:42.501 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.501 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:42.501 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.501 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:42.501 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.501 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.501 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.501 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.761 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=: 00:15:42.761 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=: 00:15:43.330 18:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.330 18:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:43.330 18:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.330 18:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.330 18:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.330 18:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:43.330 18:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.330 18:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:43.330 18:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:43.589 18:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:43.589 18:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.589 18:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:43.589 
18:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:43.589 18:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:43.589 18:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.589 18:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.589 18:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.589 18:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.589 18:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.589 18:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.589 18:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.589 18:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.848 00:15:43.848 18:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.848 18:53:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.848 18:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.108 18:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.108 18:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.108 18:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.108 18:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.108 18:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.108 18:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.108 { 00:15:44.108 "cntlid": 65, 00:15:44.108 "qid": 0, 00:15:44.108 "state": "enabled", 00:15:44.108 "thread": "nvmf_tgt_poll_group_000", 00:15:44.108 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:44.108 "listen_address": { 00:15:44.108 "trtype": "TCP", 00:15:44.108 "adrfam": "IPv4", 00:15:44.108 "traddr": "10.0.0.2", 00:15:44.108 "trsvcid": "4420" 00:15:44.108 }, 00:15:44.108 "peer_address": { 00:15:44.108 "trtype": "TCP", 00:15:44.108 "adrfam": "IPv4", 00:15:44.108 "traddr": "10.0.0.1", 00:15:44.108 "trsvcid": "37868" 00:15:44.108 }, 00:15:44.108 "auth": { 00:15:44.108 "state": "completed", 00:15:44.108 "digest": "sha384", 00:15:44.108 "dhgroup": "ffdhe3072" 00:15:44.108 } 00:15:44.108 } 00:15:44.108 ]' 00:15:44.108 18:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.108 18:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:15:44.108 18:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.108 18:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:44.108 18:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.108 18:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.108 18:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.108 18:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.367 18:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=: 00:15:44.367 18:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=: 00:15:44.934 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.934 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:44.934 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.934 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.934 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.934 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:44.934 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:44.934 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:45.193 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:15:45.193 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:45.193 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:45.193 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:45.193 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:45.193 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.193 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:15:45.193 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.193 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.193 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.194 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.194 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.194 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.452 00:15:45.452 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.452 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.452 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.711 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.711 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.711 18:53:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.711 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.711 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.711 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.711 { 00:15:45.711 "cntlid": 67, 00:15:45.711 "qid": 0, 00:15:45.711 "state": "enabled", 00:15:45.711 "thread": "nvmf_tgt_poll_group_000", 00:15:45.711 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:45.711 "listen_address": { 00:15:45.711 "trtype": "TCP", 00:15:45.711 "adrfam": "IPv4", 00:15:45.711 "traddr": "10.0.0.2", 00:15:45.711 "trsvcid": "4420" 00:15:45.711 }, 00:15:45.711 "peer_address": { 00:15:45.711 "trtype": "TCP", 00:15:45.711 "adrfam": "IPv4", 00:15:45.711 "traddr": "10.0.0.1", 00:15:45.711 "trsvcid": "37886" 00:15:45.711 }, 00:15:45.711 "auth": { 00:15:45.711 "state": "completed", 00:15:45.711 "digest": "sha384", 00:15:45.711 "dhgroup": "ffdhe3072" 00:15:45.711 } 00:15:45.711 } 00:15:45.711 ]' 00:15:45.711 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:45.711 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:45.711 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.711 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:45.711 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.711 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.711 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.711 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.969 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: --dhchap-ctrl-secret DHHC-1:02:ZjYwNTE4YTE4MjM4NTFlOGVjZDk2ODdiODJiYjg0M2QxNTI2OTY1ZTUwYzM1ZWE2EV/+Mg==: 00:15:45.969 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: --dhchap-ctrl-secret DHHC-1:02:ZjYwNTE4YTE4MjM4NTFlOGVjZDk2ODdiODJiYjg0M2QxNTI2OTY1ZTUwYzM1ZWE2EV/+Mg==: 00:15:46.536 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.536 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:46.536 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.536 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.536 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.536 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.536 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:46.536 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:46.794 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:15:46.794 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.794 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:46.794 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:46.794 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:46.794 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.794 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.794 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.794 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.795 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.795 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.795 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.795 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.052 00:15:47.052 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.052 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.052 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.310 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.310 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.310 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.310 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.310 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.310 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.310 { 00:15:47.310 "cntlid": 69, 00:15:47.310 "qid": 0, 00:15:47.310 "state": "enabled", 00:15:47.310 "thread": "nvmf_tgt_poll_group_000", 00:15:47.310 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:47.310 
"listen_address": { 00:15:47.310 "trtype": "TCP", 00:15:47.310 "adrfam": "IPv4", 00:15:47.310 "traddr": "10.0.0.2", 00:15:47.310 "trsvcid": "4420" 00:15:47.310 }, 00:15:47.310 "peer_address": { 00:15:47.310 "trtype": "TCP", 00:15:47.310 "adrfam": "IPv4", 00:15:47.310 "traddr": "10.0.0.1", 00:15:47.310 "trsvcid": "37912" 00:15:47.310 }, 00:15:47.310 "auth": { 00:15:47.310 "state": "completed", 00:15:47.310 "digest": "sha384", 00:15:47.310 "dhgroup": "ffdhe3072" 00:15:47.310 } 00:15:47.310 } 00:15:47.310 ]' 00:15:47.310 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.310 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:47.310 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.310 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:47.310 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.310 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.310 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.310 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.570 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:01:MDQyNzQwZTg4OTAyYzY3Zjc1MmE2ZmRlMDQ5ODIyMjEWgd7d: 00:15:47.570 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:01:MDQyNzQwZTg4OTAyYzY3Zjc1MmE2ZmRlMDQ5ODIyMjEWgd7d: 00:15:48.135 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.135 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:48.135 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.136 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.136 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.136 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.136 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:48.136 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:48.394 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:15:48.394 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.394 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:15:48.394 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:48.394 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:48.394 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.394 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:48.394 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.394 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.394 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.394 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:48.394 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:48.394 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:48.653 00:15:48.653 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.653 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:15:48.653 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.653 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.653 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.653 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.653 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.911 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.911 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:48.911 { 00:15:48.911 "cntlid": 71, 00:15:48.911 "qid": 0, 00:15:48.911 "state": "enabled", 00:15:48.911 "thread": "nvmf_tgt_poll_group_000", 00:15:48.911 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:48.911 "listen_address": { 00:15:48.911 "trtype": "TCP", 00:15:48.911 "adrfam": "IPv4", 00:15:48.911 "traddr": "10.0.0.2", 00:15:48.911 "trsvcid": "4420" 00:15:48.911 }, 00:15:48.911 "peer_address": { 00:15:48.911 "trtype": "TCP", 00:15:48.911 "adrfam": "IPv4", 00:15:48.911 "traddr": "10.0.0.1", 00:15:48.911 "trsvcid": "60852" 00:15:48.911 }, 00:15:48.911 "auth": { 00:15:48.911 "state": "completed", 00:15:48.911 "digest": "sha384", 00:15:48.911 "dhgroup": "ffdhe3072" 00:15:48.911 } 00:15:48.911 } 00:15:48.911 ]' 00:15:48.911 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:48.911 18:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:48.911 18:53:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:48.911 18:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:48.911 18:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:48.911 18:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.911 18:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.911 18:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.171 18:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=: 00:15:49.171 18:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=: 00:15:49.738 18:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.739 18:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:49.739 18:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:49.739 18:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.739 18:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.739 18:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:49.739 18:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:49.739 18:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:49.739 18:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:49.998 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:15:49.998 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:49.998 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:49.998 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:49.998 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:49.998 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.998 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.998 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:49.998 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.998 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.998 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.998 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.998 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.257 00:15:50.257 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.257 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.257 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.257 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.257 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.257 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.257 18:53:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:50.257 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:50.257 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:50.257 {
00:15:50.257 "cntlid": 73,
00:15:50.257 "qid": 0,
00:15:50.257 "state": "enabled",
00:15:50.257 "thread": "nvmf_tgt_poll_group_000",
00:15:50.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:15:50.257 "listen_address": {
00:15:50.257 "trtype": "TCP",
00:15:50.257 "adrfam": "IPv4",
00:15:50.257 "traddr": "10.0.0.2",
00:15:50.257 "trsvcid": "4420"
00:15:50.257 },
00:15:50.257 "peer_address": {
00:15:50.257 "trtype": "TCP",
00:15:50.257 "adrfam": "IPv4",
00:15:50.257 "traddr": "10.0.0.1",
00:15:50.257 "trsvcid": "60880"
00:15:50.257 },
00:15:50.257 "auth": {
00:15:50.257 "state": "completed",
00:15:50.257 "digest": "sha384",
00:15:50.257 "dhgroup": "ffdhe4096"
00:15:50.257 }
00:15:50.257 }
00:15:50.257 ]'
00:15:50.515 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:50.515 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:50.515 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:50.515 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:15:50.515 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:50.515 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:50.516 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:50.516 18:53:12
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.774 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=: 00:15:50.774 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=: 00:15:51.342 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.342 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:51.342 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.342 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.342 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.342 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.342 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:51.342 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:51.342 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:15:51.342 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.342 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:51.342 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:51.342 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:51.342 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.342 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.342 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.342 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.342 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.342 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.342 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:51.342 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:51.601
00:15:51.859 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:51.859 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:51.860 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:51.860 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:51.860 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:51.860 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.860 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:51.860 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.860 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:51.860 {
00:15:51.860 "cntlid": 75,
00:15:51.860 "qid": 0,
00:15:51.860 "state": "enabled",
00:15:51.860 "thread": "nvmf_tgt_poll_group_000",
00:15:51.860 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:15:51.860 "listen_address": {
00:15:51.860 "trtype": "TCP",
00:15:51.860 "adrfam": "IPv4",
00:15:51.860 "traddr": "10.0.0.2",
00:15:51.860 "trsvcid": "4420"
00:15:51.860 },
00:15:51.860 "peer_address": {
00:15:51.860 "trtype": "TCP",
00:15:51.860 "adrfam": "IPv4",
00:15:51.860 "traddr": "10.0.0.1",
00:15:51.860 "trsvcid": "60906"
00:15:51.860 },
00:15:51.860 "auth": {
00:15:51.860 "state": "completed",
00:15:51.860 "digest": "sha384",
00:15:51.860 "dhgroup": "ffdhe4096"
00:15:51.860 }
00:15:51.860 }
00:15:51.860 ]'
00:15:51.860 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:51.860 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:51.860 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:52.119 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:15:52.119 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:52.119 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:52.119 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:52.119 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:52.378 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: --dhchap-ctrl-secret DHHC-1:02:ZjYwNTE4YTE4MjM4NTFlOGVjZDk2ODdiODJiYjg0M2QxNTI2OTY1ZTUwYzM1ZWE2EV/+Mg==:
00:15:52.378 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: --dhchap-ctrl-secret DHHC-1:02:ZjYwNTE4YTE4MjM4NTFlOGVjZDk2ODdiODJiYjg0M2QxNTI2OTY1ZTUwYzM1ZWE2EV/+Mg==: 00:15:52.947 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.947 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:52.947 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.947 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.947 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.947 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.947 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:52.947 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:52.947 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:15:52.947 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.947 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:15:52.947 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:52.947 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:52.947 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.947 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.947 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.947 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.947 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.947 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.947 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.947 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.206 00:15:53.465 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers
00:15:53.465 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:53.465 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:53.465 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:53.465 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:53.465 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.465 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:53.465 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.465 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:53.465 {
00:15:53.465 "cntlid": 77,
00:15:53.465 "qid": 0,
00:15:53.465 "state": "enabled",
00:15:53.465 "thread": "nvmf_tgt_poll_group_000",
00:15:53.465 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:15:53.465 "listen_address": {
00:15:53.465 "trtype": "TCP",
00:15:53.465 "adrfam": "IPv4",
00:15:53.465 "traddr": "10.0.0.2",
00:15:53.465 "trsvcid": "4420"
00:15:53.465 },
00:15:53.465 "peer_address": {
00:15:53.465 "trtype": "TCP",
00:15:53.465 "adrfam": "IPv4",
00:15:53.465 "traddr": "10.0.0.1",
00:15:53.465 "trsvcid": "60932"
00:15:53.465 },
00:15:53.465 "auth": {
00:15:53.465 "state": "completed",
00:15:53.465 "digest": "sha384",
00:15:53.465 "dhgroup": "ffdhe4096"
00:15:53.465 }
00:15:53.465 }
00:15:53.465 ]'
00:15:53.724 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:53.724 18:53:15
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:53.724 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.724 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:53.724 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.724 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.724 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.724 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.983 18:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:01:MDQyNzQwZTg4OTAyYzY3Zjc1MmE2ZmRlMDQ5ODIyMjEWgd7d: 00:15:53.983 18:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:01:MDQyNzQwZTg4OTAyYzY3Zjc1MmE2ZmRlMDQ5ODIyMjEWgd7d: 00:15:54.554 18:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.554 18:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:54.554 18:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.554 18:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.554 18:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.554 18:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.554 18:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:54.554 18:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:54.554 18:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:15:54.554 18:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.554 18:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:54.554 18:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:54.554 18:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:54.554 18:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.554 18:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:54.554 18:53:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.554 18:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.554 18:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.554 18:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:54.554 18:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:54.554 18:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:54.814 00:15:54.814 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.814 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.814 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.073 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.073 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.073 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.073 18:53:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:55.073 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:55.073 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:55.073 {
00:15:55.073 "cntlid": 79,
00:15:55.073 "qid": 0,
00:15:55.073 "state": "enabled",
00:15:55.073 "thread": "nvmf_tgt_poll_group_000",
00:15:55.073 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:15:55.073 "listen_address": {
00:15:55.073 "trtype": "TCP",
00:15:55.073 "adrfam": "IPv4",
00:15:55.073 "traddr": "10.0.0.2",
00:15:55.073 "trsvcid": "4420"
00:15:55.073 },
00:15:55.073 "peer_address": {
00:15:55.073 "trtype": "TCP",
00:15:55.073 "adrfam": "IPv4",
00:15:55.073 "traddr": "10.0.0.1",
00:15:55.073 "trsvcid": "60956"
00:15:55.073 },
00:15:55.073 "auth": {
00:15:55.073 "state": "completed",
00:15:55.073 "digest": "sha384",
00:15:55.073 "dhgroup": "ffdhe4096"
00:15:55.073 }
00:15:55.073 }
00:15:55.073 ]'
00:15:55.073 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:55.073 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:55.073 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:55.073 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:15:55.073 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:55.332 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:55.332 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:55.332 18:53:17
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.332 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=: 00:15:55.332 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=: 00:15:55.900 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.900 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:55.900 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.900 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.900 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.900 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:55.900 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.900 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:15:55.900 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:56.158 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:15:56.158 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.158 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:56.158 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:56.158 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:56.158 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.158 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.158 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.158 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.158 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.158 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.158 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:56.158 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:56.726
00:15:56.726 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:56.726 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:56.726 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:56.726 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:56.726 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:56.726 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.726 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:56.726 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:56.726 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:56.726 {
00:15:56.726 "cntlid": 81,
00:15:56.726 "qid": 0,
00:15:56.726 "state": "enabled",
00:15:56.726 "thread": "nvmf_tgt_poll_group_000",
00:15:56.726 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:15:56.726 "listen_address": {
00:15:56.726 "trtype": "TCP",
00:15:56.726 "adrfam": "IPv4",
00:15:56.726 "traddr": "10.0.0.2",
00:15:56.726 "trsvcid": "4420"
00:15:56.726 },
00:15:56.726 "peer_address": {
00:15:56.726 "trtype": "TCP",
00:15:56.726 "adrfam": "IPv4",
00:15:56.726 "traddr": "10.0.0.1",
00:15:56.726 "trsvcid": "60994"
00:15:56.726 },
00:15:56.726 "auth": {
00:15:56.726 "state": "completed",
00:15:56.726 "digest": "sha384",
00:15:56.726 "dhgroup": "ffdhe6144"
00:15:56.726 }
00:15:56.726 }
00:15:56.726 ]'
00:15:56.985 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:56.985 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:56.985 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:56.985 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:56.985 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:56.985 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:56.985 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:56.985 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:57.245 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=:
00:15:57.245 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=: 00:15:57.813 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.813 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:57.813 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.813 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.813 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.813 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.813 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:57.813 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:58.072 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:15:58.072 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:15:58.072 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:58.072 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:58.072 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:58.072 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.072 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.072 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.072 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.072 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.072 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.072 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.072 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.332 00:15:58.332 18:53:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.332 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.332 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.591 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.591 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.591 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.591 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.591 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.591 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.591 { 00:15:58.591 "cntlid": 83, 00:15:58.591 "qid": 0, 00:15:58.591 "state": "enabled", 00:15:58.591 "thread": "nvmf_tgt_poll_group_000", 00:15:58.591 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:58.591 "listen_address": { 00:15:58.591 "trtype": "TCP", 00:15:58.591 "adrfam": "IPv4", 00:15:58.591 "traddr": "10.0.0.2", 00:15:58.591 "trsvcid": "4420" 00:15:58.591 }, 00:15:58.591 "peer_address": { 00:15:58.591 "trtype": "TCP", 00:15:58.591 "adrfam": "IPv4", 00:15:58.591 "traddr": "10.0.0.1", 00:15:58.591 "trsvcid": "41532" 00:15:58.591 }, 00:15:58.591 "auth": { 00:15:58.591 "state": "completed", 00:15:58.591 "digest": "sha384", 00:15:58.591 "dhgroup": "ffdhe6144" 00:15:58.591 } 00:15:58.591 } 00:15:58.591 ]' 00:15:58.591 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:15:58.591 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:58.591 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.591 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:58.591 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.591 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.591 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.591 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.853 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: --dhchap-ctrl-secret DHHC-1:02:ZjYwNTE4YTE4MjM4NTFlOGVjZDk2ODdiODJiYjg0M2QxNTI2OTY1ZTUwYzM1ZWE2EV/+Mg==: 00:15:58.853 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: --dhchap-ctrl-secret DHHC-1:02:ZjYwNTE4YTE4MjM4NTFlOGVjZDk2ODdiODJiYjg0M2QxNTI2OTY1ZTUwYzM1ZWE2EV/+Mg==: 00:15:59.459 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.459 18:53:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:59.459 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.459 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.459 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.459 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.459 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:59.459 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:59.759 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:15:59.759 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:59.759 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:59.759 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:59.759 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:59.759 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.759 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.759 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.759 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.759 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.759 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.760 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.760 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.019 00:16:00.019 18:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.019 18:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.019 18:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.278 18:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.278 18:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.278 18:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.278 18:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.278 18:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.278 18:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.278 { 00:16:00.278 "cntlid": 85, 00:16:00.278 "qid": 0, 00:16:00.278 "state": "enabled", 00:16:00.278 "thread": "nvmf_tgt_poll_group_000", 00:16:00.278 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:00.279 "listen_address": { 00:16:00.279 "trtype": "TCP", 00:16:00.279 "adrfam": "IPv4", 00:16:00.279 "traddr": "10.0.0.2", 00:16:00.279 "trsvcid": "4420" 00:16:00.279 }, 00:16:00.279 "peer_address": { 00:16:00.279 "trtype": "TCP", 00:16:00.279 "adrfam": "IPv4", 00:16:00.279 "traddr": "10.0.0.1", 00:16:00.279 "trsvcid": "41554" 00:16:00.279 }, 00:16:00.279 "auth": { 00:16:00.279 "state": "completed", 00:16:00.279 "digest": "sha384", 00:16:00.279 "dhgroup": "ffdhe6144" 00:16:00.279 } 00:16:00.279 } 00:16:00.279 ]' 00:16:00.279 18:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.279 18:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:00.279 18:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.279 18:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:00.279 18:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.279 18:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:00.279 18:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.279 18:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.538 18:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:01:MDQyNzQwZTg4OTAyYzY3Zjc1MmE2ZmRlMDQ5ODIyMjEWgd7d: 00:16:00.538 18:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:01:MDQyNzQwZTg4OTAyYzY3Zjc1MmE2ZmRlMDQ5ODIyMjEWgd7d: 00:16:01.105 18:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.105 18:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:01.105 18:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.105 18:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.105 18:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.105 18:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:16:01.105 18:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:01.106 18:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:01.365 18:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:01.365 18:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.365 18:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:01.365 18:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:01.365 18:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:01.365 18:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.365 18:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:01.365 18:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.365 18:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.365 18:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.365 18:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:01.365 18:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:01.365 18:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:01.624 00:16:01.624 18:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.624 18:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.624 18:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.883 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.883 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.883 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.883 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.883 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.883 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.883 { 00:16:01.883 "cntlid": 87, 00:16:01.883 "qid": 0, 00:16:01.883 "state": "enabled", 00:16:01.883 "thread": "nvmf_tgt_poll_group_000", 00:16:01.883 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:01.883 "listen_address": { 00:16:01.883 "trtype": 
"TCP", 00:16:01.883 "adrfam": "IPv4", 00:16:01.883 "traddr": "10.0.0.2", 00:16:01.883 "trsvcid": "4420" 00:16:01.883 }, 00:16:01.883 "peer_address": { 00:16:01.883 "trtype": "TCP", 00:16:01.883 "adrfam": "IPv4", 00:16:01.883 "traddr": "10.0.0.1", 00:16:01.883 "trsvcid": "41584" 00:16:01.883 }, 00:16:01.883 "auth": { 00:16:01.883 "state": "completed", 00:16:01.883 "digest": "sha384", 00:16:01.883 "dhgroup": "ffdhe6144" 00:16:01.883 } 00:16:01.883 } 00:16:01.883 ]' 00:16:01.883 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.883 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:01.883 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.883 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:01.883 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.883 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.883 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.883 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.141 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=: 00:16:02.141 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=: 00:16:02.710 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.710 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:02.710 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.710 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.710 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.710 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:02.710 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.710 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:02.710 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:02.969 18:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:02.969 18:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.969 18:53:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:02.969 18:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:02.969 18:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:02.969 18:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.969 18:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.969 18:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.969 18:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.969 18:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.969 18:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.969 18:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.969 18:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.536 00:16:03.536 18:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.536 18:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.536 18:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.536 18:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.536 18:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.536 18:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.536 18:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.536 18:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.536 18:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.536 { 00:16:03.536 "cntlid": 89, 00:16:03.536 "qid": 0, 00:16:03.536 "state": "enabled", 00:16:03.536 "thread": "nvmf_tgt_poll_group_000", 00:16:03.536 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:03.536 "listen_address": { 00:16:03.536 "trtype": "TCP", 00:16:03.536 "adrfam": "IPv4", 00:16:03.536 "traddr": "10.0.0.2", 00:16:03.536 "trsvcid": "4420" 00:16:03.536 }, 00:16:03.536 "peer_address": { 00:16:03.536 "trtype": "TCP", 00:16:03.536 "adrfam": "IPv4", 00:16:03.536 "traddr": "10.0.0.1", 00:16:03.536 "trsvcid": "41606" 00:16:03.536 }, 00:16:03.536 "auth": { 00:16:03.536 "state": "completed", 00:16:03.536 "digest": "sha384", 00:16:03.536 "dhgroup": "ffdhe8192" 00:16:03.536 } 00:16:03.536 } 00:16:03.536 ]' 00:16:03.536 18:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.795 18:53:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:03.795 18:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.795 18:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:03.795 18:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.795 18:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.795 18:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.795 18:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.053 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=: 00:16:04.053 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=: 00:16:04.619 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:16:04.619 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:04.619 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.619 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.619 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.619 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.619 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:04.619 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:04.619 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:04.619 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.619 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:04.619 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:04.619 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:04.619 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.619 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.619 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.619 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.619 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.619 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.620 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.620 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.185 00:16:05.185 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.185 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.185 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.443 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.443 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.443 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.443 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.443 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.443 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.443 { 00:16:05.443 "cntlid": 91, 00:16:05.443 "qid": 0, 00:16:05.443 "state": "enabled", 00:16:05.443 "thread": "nvmf_tgt_poll_group_000", 00:16:05.443 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:05.443 "listen_address": { 00:16:05.443 "trtype": "TCP", 00:16:05.443 "adrfam": "IPv4", 00:16:05.443 "traddr": "10.0.0.2", 00:16:05.443 "trsvcid": "4420" 00:16:05.443 }, 00:16:05.443 "peer_address": { 00:16:05.443 "trtype": "TCP", 00:16:05.443 "adrfam": "IPv4", 00:16:05.443 "traddr": "10.0.0.1", 00:16:05.443 "trsvcid": "41638" 00:16:05.443 }, 00:16:05.443 "auth": { 00:16:05.443 "state": "completed", 00:16:05.443 "digest": "sha384", 00:16:05.443 "dhgroup": "ffdhe8192" 00:16:05.443 } 00:16:05.443 } 00:16:05.443 ]' 00:16:05.443 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.443 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:05.443 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.443 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:05.443 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.443 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:05.443 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.443 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.702 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: --dhchap-ctrl-secret DHHC-1:02:ZjYwNTE4YTE4MjM4NTFlOGVjZDk2ODdiODJiYjg0M2QxNTI2OTY1ZTUwYzM1ZWE2EV/+Mg==: 00:16:05.702 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: --dhchap-ctrl-secret DHHC-1:02:ZjYwNTE4YTE4MjM4NTFlOGVjZDk2ODdiODJiYjg0M2QxNTI2OTY1ZTUwYzM1ZWE2EV/+Mg==: 00:16:06.270 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.270 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:06.270 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.270 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.270 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.270 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:16:06.270 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:06.270 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:06.529 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:06.529 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.529 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:06.529 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:06.529 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:06.529 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.529 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.529 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.529 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.529 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.529 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.529 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.530 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.099 00:16:07.099 18:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.099 18:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.099 18:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.099 18:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.099 18:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.099 18:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.099 18:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.099 18:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.099 18:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.099 { 00:16:07.099 "cntlid": 93, 00:16:07.099 "qid": 0, 00:16:07.099 "state": "enabled", 00:16:07.099 "thread": "nvmf_tgt_poll_group_000", 00:16:07.099 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:07.099 "listen_address": { 00:16:07.099 "trtype": "TCP", 00:16:07.099 "adrfam": "IPv4", 00:16:07.099 "traddr": "10.0.0.2", 00:16:07.099 "trsvcid": "4420" 00:16:07.099 }, 00:16:07.099 "peer_address": { 00:16:07.099 "trtype": "TCP", 00:16:07.099 "adrfam": "IPv4", 00:16:07.099 "traddr": "10.0.0.1", 00:16:07.099 "trsvcid": "41662" 00:16:07.099 }, 00:16:07.099 "auth": { 00:16:07.099 "state": "completed", 00:16:07.099 "digest": "sha384", 00:16:07.099 "dhgroup": "ffdhe8192" 00:16:07.099 } 00:16:07.099 } 00:16:07.099 ]' 00:16:07.099 18:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.358 18:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:07.358 18:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.358 18:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:07.358 18:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.358 18:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.358 18:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.358 18:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.616 18:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:01:MDQyNzQwZTg4OTAyYzY3Zjc1MmE2ZmRlMDQ5ODIyMjEWgd7d: 00:16:07.616 18:53:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:01:MDQyNzQwZTg4OTAyYzY3Zjc1MmE2ZmRlMDQ5ODIyMjEWgd7d: 00:16:08.184 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.184 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:08.184 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.184 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.184 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.184 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.184 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:08.184 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:08.184 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:08.184 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:16:08.184 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:08.184 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:08.184 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:08.184 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.184 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:08.184 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.184 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.184 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.184 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:08.184 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:08.184 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:08.752 00:16:08.752 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:08.752 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.752 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.011 18:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.011 18:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.011 18:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.011 18:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.011 18:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.011 18:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.011 { 00:16:09.011 "cntlid": 95, 00:16:09.011 "qid": 0, 00:16:09.011 "state": "enabled", 00:16:09.011 "thread": "nvmf_tgt_poll_group_000", 00:16:09.011 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:09.011 "listen_address": { 00:16:09.011 "trtype": "TCP", 00:16:09.011 "adrfam": "IPv4", 00:16:09.011 "traddr": "10.0.0.2", 00:16:09.011 "trsvcid": "4420" 00:16:09.011 }, 00:16:09.011 "peer_address": { 00:16:09.011 "trtype": "TCP", 00:16:09.011 "adrfam": "IPv4", 00:16:09.011 "traddr": "10.0.0.1", 00:16:09.011 "trsvcid": "46326" 00:16:09.011 }, 00:16:09.011 "auth": { 00:16:09.011 "state": "completed", 00:16:09.011 "digest": "sha384", 00:16:09.011 "dhgroup": "ffdhe8192" 00:16:09.011 } 00:16:09.011 } 00:16:09.011 ]' 00:16:09.011 18:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.011 18:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:09.011 18:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.011 18:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:09.011 18:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.011 18:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.011 18:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.011 18:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.270 18:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=: 00:16:09.270 18:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=: 00:16:09.837 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.837 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:09.837 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.837 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.837 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.837 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:09.837 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:09.837 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.837 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:09.837 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:10.095 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:10.095 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.095 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:10.095 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:10.095 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:10.095 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.095 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.095 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.095 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.095 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.095 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.095 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.095 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.353 00:16:10.353 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.353 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.353 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.612 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.612 18:53:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.612 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.612 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.612 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.612 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.612 { 00:16:10.612 "cntlid": 97, 00:16:10.612 "qid": 0, 00:16:10.612 "state": "enabled", 00:16:10.612 "thread": "nvmf_tgt_poll_group_000", 00:16:10.612 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:10.612 "listen_address": { 00:16:10.612 "trtype": "TCP", 00:16:10.612 "adrfam": "IPv4", 00:16:10.612 "traddr": "10.0.0.2", 00:16:10.612 "trsvcid": "4420" 00:16:10.612 }, 00:16:10.612 "peer_address": { 00:16:10.612 "trtype": "TCP", 00:16:10.612 "adrfam": "IPv4", 00:16:10.612 "traddr": "10.0.0.1", 00:16:10.612 "trsvcid": "46352" 00:16:10.612 }, 00:16:10.612 "auth": { 00:16:10.612 "state": "completed", 00:16:10.612 "digest": "sha512", 00:16:10.612 "dhgroup": "null" 00:16:10.612 } 00:16:10.612 } 00:16:10.612 ]' 00:16:10.612 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.612 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:10.612 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.612 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:10.612 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.612 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.612 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.612 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.872 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=: 00:16:10.872 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=: 00:16:11.439 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.439 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:11.439 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.439 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.439 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.439 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.439 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:11.439 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:11.698 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:11.698 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.698 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:11.698 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:11.698 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:11.698 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.698 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.698 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.698 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.698 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.698 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.698 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.698 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.956 00:16:11.956 18:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.956 18:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.956 18:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.214 18:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.214 18:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.214 18:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.214 18:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.214 18:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.214 18:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.214 { 00:16:12.214 "cntlid": 99, 
00:16:12.214 "qid": 0, 00:16:12.214 "state": "enabled", 00:16:12.214 "thread": "nvmf_tgt_poll_group_000", 00:16:12.214 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:12.214 "listen_address": { 00:16:12.214 "trtype": "TCP", 00:16:12.214 "adrfam": "IPv4", 00:16:12.214 "traddr": "10.0.0.2", 00:16:12.214 "trsvcid": "4420" 00:16:12.214 }, 00:16:12.214 "peer_address": { 00:16:12.214 "trtype": "TCP", 00:16:12.214 "adrfam": "IPv4", 00:16:12.215 "traddr": "10.0.0.1", 00:16:12.215 "trsvcid": "46368" 00:16:12.215 }, 00:16:12.215 "auth": { 00:16:12.215 "state": "completed", 00:16:12.215 "digest": "sha512", 00:16:12.215 "dhgroup": "null" 00:16:12.215 } 00:16:12.215 } 00:16:12.215 ]' 00:16:12.215 18:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.215 18:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:12.215 18:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.215 18:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:12.215 18:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.215 18:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.215 18:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.215 18:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.473 18:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: --dhchap-ctrl-secret 
DHHC-1:02:ZjYwNTE4YTE4MjM4NTFlOGVjZDk2ODdiODJiYjg0M2QxNTI2OTY1ZTUwYzM1ZWE2EV/+Mg==: 00:16:12.473 18:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: --dhchap-ctrl-secret DHHC-1:02:ZjYwNTE4YTE4MjM4NTFlOGVjZDk2ODdiODJiYjg0M2QxNTI2OTY1ZTUwYzM1ZWE2EV/+Mg==: 00:16:13.040 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.040 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:13.040 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.040 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.040 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.040 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.040 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:13.040 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:13.297 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:16:13.297 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.297 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:13.297 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:13.297 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:13.297 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.297 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.297 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.297 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.297 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.297 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.297 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.297 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.555 00:16:13.555 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.555 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.555 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.813 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.813 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.813 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.813 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.813 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.813 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.813 { 00:16:13.813 "cntlid": 101, 00:16:13.813 "qid": 0, 00:16:13.813 "state": "enabled", 00:16:13.813 "thread": "nvmf_tgt_poll_group_000", 00:16:13.813 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:13.813 "listen_address": { 00:16:13.813 "trtype": "TCP", 00:16:13.813 "adrfam": "IPv4", 00:16:13.813 "traddr": "10.0.0.2", 00:16:13.813 "trsvcid": "4420" 00:16:13.813 }, 00:16:13.813 "peer_address": { 00:16:13.813 "trtype": "TCP", 00:16:13.813 "adrfam": "IPv4", 00:16:13.813 "traddr": "10.0.0.1", 00:16:13.813 "trsvcid": "46386" 00:16:13.813 }, 00:16:13.813 "auth": { 00:16:13.813 "state": "completed", 00:16:13.813 "digest": "sha512", 00:16:13.813 "dhgroup": "null" 00:16:13.813 } 00:16:13.813 } 
00:16:13.813 ]' 00:16:13.813 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.813 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:13.813 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.813 18:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:13.813 18:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:13.813 18:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.813 18:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.813 18:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.071 18:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:01:MDQyNzQwZTg4OTAyYzY3Zjc1MmE2ZmRlMDQ5ODIyMjEWgd7d: 00:16:14.071 18:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:01:MDQyNzQwZTg4OTAyYzY3Zjc1MmE2ZmRlMDQ5ODIyMjEWgd7d: 00:16:14.640 18:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.640 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.640 18:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:14.640 18:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.640 18:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.640 18:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.640 18:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.640 18:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:14.640 18:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:14.899 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:14.899 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.899 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:14.899 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:14.899 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:14.899 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.899 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:14.899 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.899 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.899 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.899 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:14.899 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:14.899 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:15.159 00:16:15.159 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.159 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.159 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.159 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.159 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:15.159 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.159 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.418 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.418 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.418 { 00:16:15.419 "cntlid": 103, 00:16:15.419 "qid": 0, 00:16:15.419 "state": "enabled", 00:16:15.419 "thread": "nvmf_tgt_poll_group_000", 00:16:15.419 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:15.419 "listen_address": { 00:16:15.419 "trtype": "TCP", 00:16:15.419 "adrfam": "IPv4", 00:16:15.419 "traddr": "10.0.0.2", 00:16:15.419 "trsvcid": "4420" 00:16:15.419 }, 00:16:15.419 "peer_address": { 00:16:15.419 "trtype": "TCP", 00:16:15.419 "adrfam": "IPv4", 00:16:15.419 "traddr": "10.0.0.1", 00:16:15.419 "trsvcid": "46418" 00:16:15.419 }, 00:16:15.419 "auth": { 00:16:15.419 "state": "completed", 00:16:15.419 "digest": "sha512", 00:16:15.419 "dhgroup": "null" 00:16:15.419 } 00:16:15.419 } 00:16:15.419 ]' 00:16:15.419 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.419 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:15.419 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.419 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:15.419 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.419 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.419 18:53:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.419 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.677 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=: 00:16:15.677 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=: 00:16:16.245 18:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.245 18:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:16.245 18:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.245 18:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.245 18:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.245 18:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:16.245 18:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.245 18:53:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:16.245 18:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:16.504 18:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:16.504 18:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.504 18:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:16.504 18:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:16.504 18:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:16.504 18:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.504 18:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.504 18:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.504 18:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.504 18:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.504 18:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.504 18:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.504 18:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.504 00:16:16.763 18:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.763 18:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.763 18:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.763 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.763 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.763 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.763 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.763 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.763 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.763 { 00:16:16.763 "cntlid": 105, 00:16:16.763 "qid": 0, 00:16:16.763 "state": "enabled", 00:16:16.763 "thread": "nvmf_tgt_poll_group_000", 00:16:16.763 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:16.763 "listen_address": { 00:16:16.763 "trtype": "TCP", 00:16:16.763 "adrfam": "IPv4", 00:16:16.763 "traddr": "10.0.0.2", 00:16:16.763 "trsvcid": "4420" 00:16:16.763 }, 00:16:16.763 "peer_address": { 00:16:16.763 "trtype": "TCP", 00:16:16.763 "adrfam": "IPv4", 00:16:16.763 "traddr": "10.0.0.1", 00:16:16.763 "trsvcid": "46438" 00:16:16.763 }, 00:16:16.763 "auth": { 00:16:16.763 "state": "completed", 00:16:16.763 "digest": "sha512", 00:16:16.763 "dhgroup": "ffdhe2048" 00:16:16.763 } 00:16:16.763 } 00:16:16.763 ]' 00:16:16.763 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.026 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:17.026 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.026 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:17.026 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.026 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.026 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.026 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.286 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret 
DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=: 00:16:17.286 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=: 00:16:17.854 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.854 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:17.854 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.854 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.854 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.854 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.854 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:17.854 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:17.854 18:53:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:17.854 18:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.854 18:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:17.854 18:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:17.854 18:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:17.854 18:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.854 18:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.854 18:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.854 18:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.112 18:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.112 18:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.112 18:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.112 18:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.112 00:16:18.371 18:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.371 18:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.371 18:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.371 18:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.371 18:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.371 18:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.371 18:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.371 18:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.371 18:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.371 { 00:16:18.371 "cntlid": 107, 00:16:18.371 "qid": 0, 00:16:18.371 "state": "enabled", 00:16:18.371 "thread": "nvmf_tgt_poll_group_000", 00:16:18.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:18.371 "listen_address": { 00:16:18.371 "trtype": "TCP", 00:16:18.371 "adrfam": "IPv4", 00:16:18.371 "traddr": "10.0.0.2", 00:16:18.371 "trsvcid": "4420" 00:16:18.371 }, 00:16:18.371 "peer_address": { 00:16:18.371 "trtype": "TCP", 00:16:18.371 "adrfam": "IPv4", 00:16:18.371 "traddr": "10.0.0.1", 00:16:18.371 "trsvcid": "58638" 00:16:18.371 }, 00:16:18.371 "auth": { 00:16:18.371 "state": 
"completed", 00:16:18.371 "digest": "sha512", 00:16:18.371 "dhgroup": "ffdhe2048" 00:16:18.371 } 00:16:18.371 } 00:16:18.371 ]' 00:16:18.371 18:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.630 18:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:18.630 18:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.630 18:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:18.630 18:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.630 18:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.630 18:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.630 18:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.888 18:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: --dhchap-ctrl-secret DHHC-1:02:ZjYwNTE4YTE4MjM4NTFlOGVjZDk2ODdiODJiYjg0M2QxNTI2OTY1ZTUwYzM1ZWE2EV/+Mg==: 00:16:18.888 18:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: --dhchap-ctrl-secret DHHC-1:02:ZjYwNTE4YTE4MjM4NTFlOGVjZDk2ODdiODJiYjg0M2QxNTI2OTY1ZTUwYzM1ZWE2EV/+Mg==: 00:16:19.456 18:53:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.456 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:19.456 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.456 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.456 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.456 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.456 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:19.456 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:19.456 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:19.456 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.456 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:19.456 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:19.456 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:19.456 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.456 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.456 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.456 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.456 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.456 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.456 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.456 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.714 00:16:19.973 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.973 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.973 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.973 
18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.973 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.973 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.973 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.973 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.973 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.973 { 00:16:19.973 "cntlid": 109, 00:16:19.973 "qid": 0, 00:16:19.973 "state": "enabled", 00:16:19.973 "thread": "nvmf_tgt_poll_group_000", 00:16:19.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:19.973 "listen_address": { 00:16:19.973 "trtype": "TCP", 00:16:19.973 "adrfam": "IPv4", 00:16:19.973 "traddr": "10.0.0.2", 00:16:19.973 "trsvcid": "4420" 00:16:19.973 }, 00:16:19.973 "peer_address": { 00:16:19.973 "trtype": "TCP", 00:16:19.973 "adrfam": "IPv4", 00:16:19.973 "traddr": "10.0.0.1", 00:16:19.973 "trsvcid": "58664" 00:16:19.973 }, 00:16:19.973 "auth": { 00:16:19.973 "state": "completed", 00:16:19.973 "digest": "sha512", 00:16:19.973 "dhgroup": "ffdhe2048" 00:16:19.973 } 00:16:19.973 } 00:16:19.973 ]' 00:16:19.974 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.232 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:20.232 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.232 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:20.232 18:53:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.232 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.232 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.232 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.491 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:01:MDQyNzQwZTg4OTAyYzY3Zjc1MmE2ZmRlMDQ5ODIyMjEWgd7d: 00:16:20.491 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:01:MDQyNzQwZTg4OTAyYzY3Zjc1MmE2ZmRlMDQ5ODIyMjEWgd7d: 00:16:21.059 18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.059 18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:21.059 18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.059 18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.059 
18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.059 18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.059 18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:21.059 18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:21.059 18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:21.059 18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.059 18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:21.059 18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:21.059 18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:21.059 18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.059 18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:21.059 18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.059 18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.318 18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.318 18:53:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:21.318 18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:21.318 18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:21.318 00:16:21.576 18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.576 18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.576 18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.576 18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.576 18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.576 18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.576 18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.576 18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.576 18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.576 { 00:16:21.576 "cntlid": 111, 
00:16:21.576 "qid": 0, 00:16:21.576 "state": "enabled", 00:16:21.576 "thread": "nvmf_tgt_poll_group_000", 00:16:21.576 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:21.576 "listen_address": { 00:16:21.576 "trtype": "TCP", 00:16:21.576 "adrfam": "IPv4", 00:16:21.576 "traddr": "10.0.0.2", 00:16:21.576 "trsvcid": "4420" 00:16:21.576 }, 00:16:21.576 "peer_address": { 00:16:21.576 "trtype": "TCP", 00:16:21.576 "adrfam": "IPv4", 00:16:21.576 "traddr": "10.0.0.1", 00:16:21.576 "trsvcid": "58700" 00:16:21.576 }, 00:16:21.576 "auth": { 00:16:21.576 "state": "completed", 00:16:21.576 "digest": "sha512", 00:16:21.576 "dhgroup": "ffdhe2048" 00:16:21.576 } 00:16:21.576 } 00:16:21.576 ]' 00:16:21.576 18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.835 18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:21.835 18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.835 18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:21.836 18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.836 18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.836 18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.836 18:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.094 18:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=: 00:16:22.094 18:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=: 00:16:22.662 18:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.662 18:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:22.662 18:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.662 18:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.662 18:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.662 18:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:22.662 18:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.662 18:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:22.662 18:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:22.662 18:53:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:22.662 18:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.662 18:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:22.662 18:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:22.662 18:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:22.662 18:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.662 18:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.663 18:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.663 18:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.663 18:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.663 18:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.663 18:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.663 18:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.922 00:16:22.922 18:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.922 18:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.922 18:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.181 18:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.181 18:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.181 18:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.181 18:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.181 18:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.181 18:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.181 { 00:16:23.181 "cntlid": 113, 00:16:23.181 "qid": 0, 00:16:23.181 "state": "enabled", 00:16:23.181 "thread": "nvmf_tgt_poll_group_000", 00:16:23.181 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:23.181 "listen_address": { 00:16:23.181 "trtype": "TCP", 00:16:23.181 "adrfam": "IPv4", 00:16:23.181 "traddr": "10.0.0.2", 00:16:23.181 "trsvcid": "4420" 00:16:23.181 }, 00:16:23.181 "peer_address": { 00:16:23.181 "trtype": "TCP", 00:16:23.181 "adrfam": "IPv4", 00:16:23.181 "traddr": "10.0.0.1", 00:16:23.181 "trsvcid": "58734" 00:16:23.181 }, 00:16:23.181 "auth": { 00:16:23.181 "state": 
"completed", 00:16:23.181 "digest": "sha512", 00:16:23.181 "dhgroup": "ffdhe3072" 00:16:23.181 } 00:16:23.181 } 00:16:23.181 ]' 00:16:23.181 18:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.181 18:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:23.181 18:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.439 18:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:23.439 18:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.439 18:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.439 18:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.439 18:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.697 18:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=: 00:16:23.698 18:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret 
DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=: 00:16:24.266 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.266 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:24.266 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.266 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.266 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.266 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.266 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:24.266 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:24.266 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:24.266 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.266 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:24.266 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:24.266 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:16:24.266 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.266 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.266 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.266 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.266 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.266 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.266 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.266 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.525 00:16:24.525 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.525 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.525 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.784 18:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.784 18:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.784 18:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.784 18:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.784 18:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.784 18:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.784 { 00:16:24.784 "cntlid": 115, 00:16:24.784 "qid": 0, 00:16:24.784 "state": "enabled", 00:16:24.784 "thread": "nvmf_tgt_poll_group_000", 00:16:24.784 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:24.784 "listen_address": { 00:16:24.784 "trtype": "TCP", 00:16:24.784 "adrfam": "IPv4", 00:16:24.784 "traddr": "10.0.0.2", 00:16:24.784 "trsvcid": "4420" 00:16:24.784 }, 00:16:24.784 "peer_address": { 00:16:24.784 "trtype": "TCP", 00:16:24.784 "adrfam": "IPv4", 00:16:24.784 "traddr": "10.0.0.1", 00:16:24.784 "trsvcid": "58754" 00:16:24.784 }, 00:16:24.784 "auth": { 00:16:24.784 "state": "completed", 00:16:24.784 "digest": "sha512", 00:16:24.784 "dhgroup": "ffdhe3072" 00:16:24.784 } 00:16:24.784 } 00:16:24.784 ]' 00:16:24.784 18:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.784 18:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:24.784 18:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.043 18:53:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:25.043 18:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.043 18:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.043 18:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.043 18:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.043 18:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: --dhchap-ctrl-secret DHHC-1:02:ZjYwNTE4YTE4MjM4NTFlOGVjZDk2ODdiODJiYjg0M2QxNTI2OTY1ZTUwYzM1ZWE2EV/+Mg==: 00:16:25.043 18:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: --dhchap-ctrl-secret DHHC-1:02:ZjYwNTE4YTE4MjM4NTFlOGVjZDk2ODdiODJiYjg0M2QxNTI2OTY1ZTUwYzM1ZWE2EV/+Mg==: 00:16:25.610 18:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.870 18:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:25.870 18:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:25.870 18:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.870 18:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.870 18:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.870 18:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:25.870 18:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:25.870 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:16:25.870 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.870 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:25.870 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:25.870 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:25.870 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.870 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.870 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.870 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:25.870 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.870 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.870 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.870 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.129 00:16:26.129 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.129 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.129 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.388 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.388 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.388 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.388 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.388 18:53:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.388 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.388 { 00:16:26.388 "cntlid": 117, 00:16:26.388 "qid": 0, 00:16:26.388 "state": "enabled", 00:16:26.388 "thread": "nvmf_tgt_poll_group_000", 00:16:26.388 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:26.388 "listen_address": { 00:16:26.388 "trtype": "TCP", 00:16:26.388 "adrfam": "IPv4", 00:16:26.388 "traddr": "10.0.0.2", 00:16:26.388 "trsvcid": "4420" 00:16:26.388 }, 00:16:26.388 "peer_address": { 00:16:26.388 "trtype": "TCP", 00:16:26.388 "adrfam": "IPv4", 00:16:26.388 "traddr": "10.0.0.1", 00:16:26.388 "trsvcid": "58778" 00:16:26.388 }, 00:16:26.388 "auth": { 00:16:26.388 "state": "completed", 00:16:26.388 "digest": "sha512", 00:16:26.388 "dhgroup": "ffdhe3072" 00:16:26.388 } 00:16:26.388 } 00:16:26.388 ]' 00:16:26.388 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.388 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:26.388 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.646 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:26.646 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.646 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.646 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.646 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.904 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:01:MDQyNzQwZTg4OTAyYzY3Zjc1MmE2ZmRlMDQ5ODIyMjEWgd7d: 00:16:26.904 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:01:MDQyNzQwZTg4OTAyYzY3Zjc1MmE2ZmRlMDQ5ODIyMjEWgd7d: 00:16:27.471 18:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.472 18:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:27.472 18:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.472 18:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.472 18:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.472 18:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.472 18:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:27.472 18:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:27.472 18:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:27.472 18:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.472 18:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:27.472 18:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:27.472 18:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:27.472 18:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.472 18:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:27.472 18:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.472 18:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.472 18:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.472 18:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:27.472 18:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.472 18:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.731 00:16:27.731 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.731 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.731 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.990 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.990 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.990 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.990 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.990 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.990 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.990 { 00:16:27.990 "cntlid": 119, 00:16:27.990 "qid": 0, 00:16:27.990 "state": "enabled", 00:16:27.990 "thread": "nvmf_tgt_poll_group_000", 00:16:27.990 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:27.990 "listen_address": { 00:16:27.990 "trtype": "TCP", 00:16:27.990 "adrfam": "IPv4", 00:16:27.990 "traddr": "10.0.0.2", 00:16:27.990 "trsvcid": "4420" 00:16:27.990 }, 00:16:27.990 "peer_address": { 00:16:27.990 "trtype": "TCP", 00:16:27.990 "adrfam": "IPv4", 00:16:27.990 "traddr": "10.0.0.1", 
00:16:27.990 "trsvcid": "49930" 00:16:27.990 }, 00:16:27.990 "auth": { 00:16:27.990 "state": "completed", 00:16:27.990 "digest": "sha512", 00:16:27.990 "dhgroup": "ffdhe3072" 00:16:27.990 } 00:16:27.990 } 00:16:27.990 ]' 00:16:27.990 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.990 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:27.990 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.248 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:28.248 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.248 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.248 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.248 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.507 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=: 00:16:28.507 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=: 00:16:29.075 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.075 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:29.075 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.075 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.075 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.075 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:29.075 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.075 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:29.075 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:29.075 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:29.075 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.075 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:29.075 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:29.075 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:29.075 18:53:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.075 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.075 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.075 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.075 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.075 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.075 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.076 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.334 00:16:29.593 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.593 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.593 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.593 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.593 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.593 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.593 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.593 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.593 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.593 { 00:16:29.593 "cntlid": 121, 00:16:29.593 "qid": 0, 00:16:29.593 "state": "enabled", 00:16:29.593 "thread": "nvmf_tgt_poll_group_000", 00:16:29.593 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:29.593 "listen_address": { 00:16:29.593 "trtype": "TCP", 00:16:29.593 "adrfam": "IPv4", 00:16:29.593 "traddr": "10.0.0.2", 00:16:29.593 "trsvcid": "4420" 00:16:29.593 }, 00:16:29.593 "peer_address": { 00:16:29.593 "trtype": "TCP", 00:16:29.593 "adrfam": "IPv4", 00:16:29.593 "traddr": "10.0.0.1", 00:16:29.593 "trsvcid": "49968" 00:16:29.593 }, 00:16:29.593 "auth": { 00:16:29.593 "state": "completed", 00:16:29.593 "digest": "sha512", 00:16:29.593 "dhgroup": "ffdhe4096" 00:16:29.593 } 00:16:29.593 } 00:16:29.593 ]' 00:16:29.593 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.853 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:29.853 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.853 18:53:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:29.853 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.853 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.853 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.853 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.112 18:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=: 00:16:30.112 18:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=: 00:16:30.680 18:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.680 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.680 18:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:30.680 18:53:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.680 18:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.680 18:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.680 18:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.680 18:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:30.680 18:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:30.680 18:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:30.680 18:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.680 18:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:30.680 18:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:30.680 18:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:30.680 18:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.680 18:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.680 18:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.680 18:53:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.680 18:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.680 18:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.680 18:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.680 18:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.939 00:16:30.939 18:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.939 18:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.939 18:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.198 18:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.198 18:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.198 18:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.198 18:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:31.198 18:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.198 18:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.198 { 00:16:31.198 "cntlid": 123, 00:16:31.198 "qid": 0, 00:16:31.198 "state": "enabled", 00:16:31.198 "thread": "nvmf_tgt_poll_group_000", 00:16:31.198 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:31.198 "listen_address": { 00:16:31.198 "trtype": "TCP", 00:16:31.198 "adrfam": "IPv4", 00:16:31.198 "traddr": "10.0.0.2", 00:16:31.198 "trsvcid": "4420" 00:16:31.198 }, 00:16:31.198 "peer_address": { 00:16:31.198 "trtype": "TCP", 00:16:31.198 "adrfam": "IPv4", 00:16:31.198 "traddr": "10.0.0.1", 00:16:31.198 "trsvcid": "49988" 00:16:31.198 }, 00:16:31.198 "auth": { 00:16:31.198 "state": "completed", 00:16:31.198 "digest": "sha512", 00:16:31.198 "dhgroup": "ffdhe4096" 00:16:31.198 } 00:16:31.198 } 00:16:31.198 ]' 00:16:31.198 18:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.198 18:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:31.198 18:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.456 18:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:31.456 18:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.456 18:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.456 18:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.456 18:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.714 18:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: --dhchap-ctrl-secret DHHC-1:02:ZjYwNTE4YTE4MjM4NTFlOGVjZDk2ODdiODJiYjg0M2QxNTI2OTY1ZTUwYzM1ZWE2EV/+Mg==: 00:16:31.714 18:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: --dhchap-ctrl-secret DHHC-1:02:ZjYwNTE4YTE4MjM4NTFlOGVjZDk2ODdiODJiYjg0M2QxNTI2OTY1ZTUwYzM1ZWE2EV/+Mg==: 00:16:32.281 18:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.281 18:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:32.281 18:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.281 18:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.281 18:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.281 18:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.281 18:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:32.281 18:53:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:32.281 18:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:32.281 18:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.281 18:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:32.281 18:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:32.281 18:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:32.281 18:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.281 18:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.281 18:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.281 18:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.281 18:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.281 18:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.281 18:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.281 18:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.540 00:16:32.540 18:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.540 18:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.540 18:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.800 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.800 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.800 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.800 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.800 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.800 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.800 { 00:16:32.800 "cntlid": 125, 00:16:32.800 "qid": 0, 00:16:32.800 "state": "enabled", 00:16:32.800 "thread": "nvmf_tgt_poll_group_000", 00:16:32.800 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:32.800 "listen_address": { 00:16:32.800 "trtype": "TCP", 00:16:32.800 "adrfam": "IPv4", 00:16:32.800 "traddr": "10.0.0.2", 00:16:32.800 
"trsvcid": "4420" 00:16:32.800 }, 00:16:32.800 "peer_address": { 00:16:32.800 "trtype": "TCP", 00:16:32.800 "adrfam": "IPv4", 00:16:32.800 "traddr": "10.0.0.1", 00:16:32.800 "trsvcid": "50024" 00:16:32.800 }, 00:16:32.800 "auth": { 00:16:32.800 "state": "completed", 00:16:32.800 "digest": "sha512", 00:16:32.800 "dhgroup": "ffdhe4096" 00:16:32.800 } 00:16:32.800 } 00:16:32.800 ]' 00:16:32.800 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.800 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:32.800 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.058 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:33.058 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.058 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.058 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.058 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.058 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:01:MDQyNzQwZTg4OTAyYzY3Zjc1MmE2ZmRlMDQ5ODIyMjEWgd7d: 00:16:33.058 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:01:MDQyNzQwZTg4OTAyYzY3Zjc1MmE2ZmRlMDQ5ODIyMjEWgd7d: 00:16:33.627 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.627 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:33.627 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.627 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.627 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.627 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.627 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:33.627 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:33.886 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:16:33.887 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.887 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:33.887 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:33.887 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:33.887 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.887 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:33.887 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.887 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.887 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.887 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:33.887 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:33.887 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:34.146 00:16:34.146 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.146 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.146 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.405 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.405 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.405 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.405 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.405 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.405 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.405 { 00:16:34.405 "cntlid": 127, 00:16:34.405 "qid": 0, 00:16:34.405 "state": "enabled", 00:16:34.405 "thread": "nvmf_tgt_poll_group_000", 00:16:34.405 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:34.405 "listen_address": { 00:16:34.405 "trtype": "TCP", 00:16:34.405 "adrfam": "IPv4", 00:16:34.405 "traddr": "10.0.0.2", 00:16:34.405 "trsvcid": "4420" 00:16:34.405 }, 00:16:34.405 "peer_address": { 00:16:34.405 "trtype": "TCP", 00:16:34.405 "adrfam": "IPv4", 00:16:34.405 "traddr": "10.0.0.1", 00:16:34.405 "trsvcid": "50054" 00:16:34.405 }, 00:16:34.405 "auth": { 00:16:34.405 "state": "completed", 00:16:34.405 "digest": "sha512", 00:16:34.405 "dhgroup": "ffdhe4096" 00:16:34.405 } 00:16:34.405 } 00:16:34.405 ]' 00:16:34.405 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.405 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:34.405 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.405 18:53:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:34.405 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.664 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.664 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.664 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.664 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=: 00:16:34.664 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=: 00:16:35.231 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.231 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:35.231 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.231 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:35.231 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.231 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:35.231 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.231 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:35.231 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:35.490 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:35.490 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.490 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:35.490 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:35.490 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:35.490 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.490 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.490 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.490 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:35.490 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.490 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.490 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.490 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.748 00:16:36.006 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.006 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.006 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.006 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.006 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.006 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.006 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.006 18:53:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.006 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.006 { 00:16:36.006 "cntlid": 129, 00:16:36.006 "qid": 0, 00:16:36.006 "state": "enabled", 00:16:36.006 "thread": "nvmf_tgt_poll_group_000", 00:16:36.006 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:36.006 "listen_address": { 00:16:36.006 "trtype": "TCP", 00:16:36.006 "adrfam": "IPv4", 00:16:36.006 "traddr": "10.0.0.2", 00:16:36.006 "trsvcid": "4420" 00:16:36.006 }, 00:16:36.006 "peer_address": { 00:16:36.006 "trtype": "TCP", 00:16:36.006 "adrfam": "IPv4", 00:16:36.006 "traddr": "10.0.0.1", 00:16:36.006 "trsvcid": "50078" 00:16:36.006 }, 00:16:36.006 "auth": { 00:16:36.006 "state": "completed", 00:16:36.006 "digest": "sha512", 00:16:36.006 "dhgroup": "ffdhe6144" 00:16:36.006 } 00:16:36.006 } 00:16:36.006 ]' 00:16:36.006 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.267 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.267 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.268 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:36.268 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.268 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.268 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.268 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.564 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=: 00:16:36.564 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=: 00:16:37.180 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.180 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:37.180 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.180 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.180 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.180 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.180 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:37.180 18:53:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:37.180 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:16:37.180 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.180 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:37.180 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:37.180 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:37.180 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.180 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.180 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.180 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.180 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.180 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.180 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.180 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.438 00:16:37.696 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.696 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.696 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.696 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.696 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.696 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.696 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.696 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.696 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.696 { 00:16:37.696 "cntlid": 131, 00:16:37.696 "qid": 0, 00:16:37.696 "state": "enabled", 00:16:37.696 "thread": "nvmf_tgt_poll_group_000", 00:16:37.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:37.696 "listen_address": { 00:16:37.696 "trtype": "TCP", 00:16:37.696 "adrfam": "IPv4", 00:16:37.696 "traddr": "10.0.0.2", 00:16:37.696 
"trsvcid": "4420" 00:16:37.696 }, 00:16:37.696 "peer_address": { 00:16:37.696 "trtype": "TCP", 00:16:37.696 "adrfam": "IPv4", 00:16:37.696 "traddr": "10.0.0.1", 00:16:37.696 "trsvcid": "50102" 00:16:37.696 }, 00:16:37.696 "auth": { 00:16:37.696 "state": "completed", 00:16:37.696 "digest": "sha512", 00:16:37.697 "dhgroup": "ffdhe6144" 00:16:37.697 } 00:16:37.697 } 00:16:37.697 ]' 00:16:37.697 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.697 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:37.697 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.955 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:37.955 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.955 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.955 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.955 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.213 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: --dhchap-ctrl-secret DHHC-1:02:ZjYwNTE4YTE4MjM4NTFlOGVjZDk2ODdiODJiYjg0M2QxNTI2OTY1ZTUwYzM1ZWE2EV/+Mg==: 00:16:38.213 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: --dhchap-ctrl-secret DHHC-1:02:ZjYwNTE4YTE4MjM4NTFlOGVjZDk2ODdiODJiYjg0M2QxNTI2OTY1ZTUwYzM1ZWE2EV/+Mg==: 00:16:38.780 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.780 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:38.780 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.780 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.780 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.780 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.780 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:38.780 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:39.039 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:16:39.039 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.039 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:39.039 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:39.039 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:39.039 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.039 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.039 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.039 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.039 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.040 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.040 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.040 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.298 00:16:39.298 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.298 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.299 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.557 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.557 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.557 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.557 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.557 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.557 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.557 { 00:16:39.557 "cntlid": 133, 00:16:39.557 "qid": 0, 00:16:39.557 "state": "enabled", 00:16:39.557 "thread": "nvmf_tgt_poll_group_000", 00:16:39.557 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:39.557 "listen_address": { 00:16:39.557 "trtype": "TCP", 00:16:39.557 "adrfam": "IPv4", 00:16:39.557 "traddr": "10.0.0.2", 00:16:39.557 "trsvcid": "4420" 00:16:39.557 }, 00:16:39.557 "peer_address": { 00:16:39.557 "trtype": "TCP", 00:16:39.557 "adrfam": "IPv4", 00:16:39.557 "traddr": "10.0.0.1", 00:16:39.557 "trsvcid": "40580" 00:16:39.557 }, 00:16:39.557 "auth": { 00:16:39.557 "state": "completed", 00:16:39.557 "digest": "sha512", 00:16:39.557 "dhgroup": "ffdhe6144" 00:16:39.557 } 00:16:39.557 } 00:16:39.557 ]' 00:16:39.557 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.557 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:39.557 18:54:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.557 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:39.557 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.557 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.557 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.558 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.816 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:01:MDQyNzQwZTg4OTAyYzY3Zjc1MmE2ZmRlMDQ5ODIyMjEWgd7d: 00:16:39.816 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:01:MDQyNzQwZTg4OTAyYzY3Zjc1MmE2ZmRlMDQ5ODIyMjEWgd7d: 00:16:40.383 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.383 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:40.383 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.383 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.383 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.383 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.383 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:40.383 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:40.642 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:40.642 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.642 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:40.642 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:40.642 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:40.642 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.642 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:40.642 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.642 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.642 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.642 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:40.643 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:40.643 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:40.901 00:16:40.901 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.901 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.901 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.160 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.160 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.160 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.160 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:41.160 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.160 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.160 { 00:16:41.160 "cntlid": 135, 00:16:41.160 "qid": 0, 00:16:41.160 "state": "enabled", 00:16:41.160 "thread": "nvmf_tgt_poll_group_000", 00:16:41.160 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:41.160 "listen_address": { 00:16:41.160 "trtype": "TCP", 00:16:41.160 "adrfam": "IPv4", 00:16:41.160 "traddr": "10.0.0.2", 00:16:41.160 "trsvcid": "4420" 00:16:41.160 }, 00:16:41.160 "peer_address": { 00:16:41.160 "trtype": "TCP", 00:16:41.160 "adrfam": "IPv4", 00:16:41.160 "traddr": "10.0.0.1", 00:16:41.160 "trsvcid": "40618" 00:16:41.160 }, 00:16:41.160 "auth": { 00:16:41.160 "state": "completed", 00:16:41.160 "digest": "sha512", 00:16:41.160 "dhgroup": "ffdhe6144" 00:16:41.160 } 00:16:41.160 } 00:16:41.160 ]' 00:16:41.160 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.160 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:41.160 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.160 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:41.160 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.160 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.160 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.160 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.420 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=: 00:16:41.420 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=: 00:16:41.987 18:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.987 18:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:41.987 18:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.987 18:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.987 18:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.987 18:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:41.987 18:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.987 18:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:41.987 18:54:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:16:42.246 18:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0
00:16:42.246 18:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:42.247 18:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:42.247 18:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:16:42.247 18:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:42.247 18:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:42.247 18:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:42.247 18:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:42.247 18:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:42.247 18:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:42.247 18:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:42.247 18:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:42.247 18:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:42.814
00:16:42.814 18:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:42.814 18:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:42.814 18:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:42.814 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:42.814 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:42.814 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:42.814 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:42.814 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:42.814 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:42.814 {
00:16:42.814 "cntlid": 137,
00:16:42.814 "qid": 0,
00:16:42.814 "state": "enabled",
00:16:42.814 "thread": "nvmf_tgt_poll_group_000",
00:16:42.814 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:16:42.814 "listen_address": {
00:16:42.814 "trtype": "TCP",
00:16:42.814 "adrfam": "IPv4",
00:16:42.814 "traddr": "10.0.0.2",
00:16:42.814 "trsvcid": "4420"
00:16:42.814 },
00:16:42.814 "peer_address": {
00:16:42.814 "trtype": "TCP",
00:16:42.814 "adrfam": "IPv4",
00:16:42.814 "traddr": "10.0.0.1",
00:16:42.814 "trsvcid": "40658"
00:16:42.814 },
00:16:42.814 "auth": {
00:16:42.814 "state": "completed",
00:16:42.814 "digest": "sha512",
00:16:42.814 "dhgroup": "ffdhe8192"
00:16:42.814 }
00:16:42.814 }
00:16:42.814 ]'
00:16:42.814 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:43.073 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:43.073 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:43.073 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:43.073 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:43.073 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:43.073 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:43.073 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:43.332 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=:
00:16:43.332 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=:
00:16:43.899 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:43.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:43.899 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:16:43.899 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:43.899 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:43.899 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:43.899 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:43.899 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:16:43.899 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:16:44.159 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1
00:16:44.159 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:44.159 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:44.159 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:16:44.159 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:44.159 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:44.159 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:44.159 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.159 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:44.159 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.159 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:44.159 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:44.159 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:44.418
00:16:44.418 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:44.418 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:44.418 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:44.677 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:44.677 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:44.677 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.677 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:44.677 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.677 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:44.677 {
00:16:44.677 "cntlid": 139,
00:16:44.677 "qid": 0,
00:16:44.677 "state": "enabled",
00:16:44.677 "thread": "nvmf_tgt_poll_group_000",
00:16:44.677 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:16:44.677 "listen_address": {
00:16:44.677 "trtype": "TCP",
00:16:44.677 "adrfam": "IPv4",
00:16:44.677 "traddr": "10.0.0.2",
00:16:44.677 "trsvcid": "4420"
00:16:44.677 },
00:16:44.677 "peer_address": {
00:16:44.677 "trtype": "TCP",
00:16:44.677 "adrfam": "IPv4",
00:16:44.677 "traddr": "10.0.0.1",
00:16:44.677 "trsvcid": "40688"
00:16:44.677 },
00:16:44.677 "auth": {
00:16:44.677 "state": "completed",
00:16:44.677 "digest": "sha512",
00:16:44.677 "dhgroup": "ffdhe8192"
00:16:44.677 }
00:16:44.677 }
00:16:44.677 ]'
00:16:44.677 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:44.677 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:44.677 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:44.677 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:44.936 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:44.936 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:44.936 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:44.936 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:45.194 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: --dhchap-ctrl-secret DHHC-1:02:ZjYwNTE4YTE4MjM4NTFlOGVjZDk2ODdiODJiYjg0M2QxNTI2OTY1ZTUwYzM1ZWE2EV/+Mg==:
00:16:45.194 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: --dhchap-ctrl-secret DHHC-1:02:ZjYwNTE4YTE4MjM4NTFlOGVjZDk2ODdiODJiYjg0M2QxNTI2OTY1ZTUwYzM1ZWE2EV/+Mg==:
00:16:45.761 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:45.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:45.761 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:16:45.761 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:45.761 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:45.761 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:45.761 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:45.761 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:16:45.761 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:16:45.761 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2
00:16:45.761 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:45.761 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:45.761 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:16:45.761 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:45.761 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:45.761 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:45.761 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:45.761 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:45.761 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:45.761 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:45.761 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:45.761 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:46.329
00:16:46.329 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:46.329 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:46.329 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:46.588 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:46.588 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:46.588 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.588 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:46.588 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:46.588 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:46.588 {
00:16:46.588 "cntlid": 141,
00:16:46.588 "qid": 0,
00:16:46.588 "state": "enabled",
00:16:46.588 "thread": "nvmf_tgt_poll_group_000",
00:16:46.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:16:46.588 "listen_address": {
00:16:46.588 "trtype": "TCP",
00:16:46.588 "adrfam": "IPv4",
00:16:46.588 "traddr": "10.0.0.2",
00:16:46.588 "trsvcid": "4420"
00:16:46.588 },
00:16:46.588 "peer_address": {
00:16:46.588 "trtype": "TCP",
00:16:46.588 "adrfam": "IPv4",
00:16:46.588 "traddr": "10.0.0.1",
00:16:46.588 "trsvcid": "40720"
00:16:46.588 },
00:16:46.588 "auth": {
00:16:46.588 "state": "completed",
00:16:46.588 "digest": "sha512",
00:16:46.588 "dhgroup": "ffdhe8192"
00:16:46.588 }
00:16:46.588 }
00:16:46.588 ]'
00:16:46.588 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:46.588 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:46.588 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:46.588 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:46.588 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:46.588 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:46.588 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:46.588 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:46.847 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:01:MDQyNzQwZTg4OTAyYzY3Zjc1MmE2ZmRlMDQ5ODIyMjEWgd7d:
00:16:46.847 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:01:MDQyNzQwZTg4OTAyYzY3Zjc1MmE2ZmRlMDQ5ODIyMjEWgd7d:
00:16:47.413 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:47.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:47.413 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:16:47.413 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:47.413 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:47.413 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:47.413 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:47.413 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:16:47.413 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:16:47.673 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3
00:16:47.673 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:47.673 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:47.673 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:16:47.673 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:47.673 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:47.673 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:16:47.673 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:47.673 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:47.673 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:47.673 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:47.673 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:47.673 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:48.241
00:16:48.241 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:48.241 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:48.241 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:48.241 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:48.241 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:48.241 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:48.241 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:48.241 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:48.241 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:48.241 {
00:16:48.241 "cntlid": 143,
00:16:48.241 "qid": 0,
00:16:48.241 "state": "enabled",
00:16:48.241 "thread": "nvmf_tgt_poll_group_000",
00:16:48.241 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:16:48.241 "listen_address": {
00:16:48.241 "trtype": "TCP",
00:16:48.241 "adrfam": "IPv4",
00:16:48.241 "traddr": "10.0.0.2",
00:16:48.241 "trsvcid": "4420"
00:16:48.241 },
00:16:48.241 "peer_address": {
00:16:48.241 "trtype": "TCP",
00:16:48.241 "adrfam": "IPv4",
00:16:48.241 "traddr": "10.0.0.1",
00:16:48.241 "trsvcid": "34822"
00:16:48.241 },
00:16:48.241 "auth": {
00:16:48.241 "state": "completed",
00:16:48.241 "digest": "sha512",
00:16:48.241 "dhgroup": "ffdhe8192"
00:16:48.241 }
00:16:48.241 }
00:16:48.241 ]'
00:16:48.241 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:48.500 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:48.500 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:48.500 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:48.500 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:48.500 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:48.500 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:48.500 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:48.759 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=:
00:16:48.759 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=:
00:16:49.327 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:49.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:49.327 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:16:49.328 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:49.328 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:49.328 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:49.328 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=,
00:16:49.328 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512
00:16:49.328 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=,
00:16:49.328 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:16:49.328 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:16:49.328 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:16:49.328 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0
00:16:49.328 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:49.328 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:49.328 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:16:49.328 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:49.328 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:49.328 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:49.328 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:49.328 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:49.328 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:49.328 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:49.328 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:49.328 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:49.896
00:16:49.896 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:49.896 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:49.896 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:50.155 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:50.155 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:50.155 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:50.155 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:50.155 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:50.155 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:50.155 {
00:16:50.155 "cntlid": 145,
00:16:50.155 "qid": 0,
00:16:50.155 "state": "enabled",
00:16:50.155 "thread": "nvmf_tgt_poll_group_000",
00:16:50.155 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:16:50.155 "listen_address": {
00:16:50.155 "trtype": "TCP",
00:16:50.155 "adrfam": "IPv4",
00:16:50.155 "traddr": "10.0.0.2",
00:16:50.155 "trsvcid": "4420"
00:16:50.155 },
00:16:50.155 "peer_address": {
00:16:50.155 "trtype": "TCP",
00:16:50.155 "adrfam": "IPv4",
00:16:50.155 "traddr": "10.0.0.1",
00:16:50.155 "trsvcid": "34848"
00:16:50.155 },
00:16:50.155 "auth": {
00:16:50.155 "state": "completed",
00:16:50.155 "digest": "sha512",
00:16:50.155 "dhgroup": "ffdhe8192"
00:16:50.155 }
00:16:50.155 }
00:16:50.155 ]'
00:16:50.155 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:50.155 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:50.155 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:50.155 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:50.155 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:50.155 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:50.155 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:50.155 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:50.414 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=:
00:16:50.414 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGIwOWU0ODRmMmQwZWVjZWYzOTg1ODEzYjUxMTdhNGY4NzgyYzFmZGI4Y2FkYzBlMmKFKA==: --dhchap-ctrl-secret DHHC-1:03:MmY0MWRjYTk3ZDUzODE3Njg3MWM3ZTVkNGU3OTQ4OTM4YTE3OTYxMjI3Nzg5OTk1MjU4NmQwZWIyY2ExNTA0MEEc9jI=:
00:16:50.984 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:50.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:50.984 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:16:50.984 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:50.984 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:50.984 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:50.984 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1
00:16:50.984 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:50.984 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:50.984 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:50.984 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2
00:16:50.984 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:16:50.984 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2
00:16:50.984 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:16:50.984 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:50.984 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:16:50.984 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:50.984 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2
00:16:50.984 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2
00:16:50.984 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2
00:16:51.553 request:
00:16:51.553 {
00:16:51.553 "name": "nvme0",
00:16:51.553 "trtype": "tcp",
00:16:51.553 "traddr": "10.0.0.2",
00:16:51.553 "adrfam": "ipv4",
00:16:51.553 "trsvcid": "4420",
00:16:51.553 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:16:51.553 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:16:51.553 "prchk_reftag": false,
00:16:51.553 "prchk_guard": false,
00:16:51.553 "hdgst": false,
00:16:51.553 "ddgst": false,
00:16:51.553 "dhchap_key": "key2",
00:16:51.553 "allow_unrecognized_csi": false,
00:16:51.553 "method": "bdev_nvme_attach_controller",
00:16:51.553 "req_id": 1
00:16:51.553 }
00:16:51.553 Got JSON-RPC error response
00:16:51.553 response:
00:16:51.553 {
00:16:51.553 "code": -5,
00:16:51.553 "message": "Input/output error"
00:16:51.553 }
00:16:51.553 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:16:51.553 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:51.553 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:51.553 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:51.553 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:16:51.553 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:51.553 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:51.553 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:51.553 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:51.553 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:51.553 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:51.553 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:51.553 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:16:51.553 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:16:51.553 18:54:13
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:51.553 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:51.553 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:51.553 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:51.553 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:51.553 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:51.553 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:51.553 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:52.121 request: 00:16:52.121 { 00:16:52.121 "name": "nvme0", 00:16:52.121 "trtype": "tcp", 00:16:52.121 "traddr": "10.0.0.2", 00:16:52.121 "adrfam": "ipv4", 00:16:52.121 "trsvcid": "4420", 00:16:52.121 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:52.121 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:52.121 "prchk_reftag": false, 00:16:52.121 "prchk_guard": false, 00:16:52.121 "hdgst": 
false, 00:16:52.121 "ddgst": false, 00:16:52.121 "dhchap_key": "key1", 00:16:52.121 "dhchap_ctrlr_key": "ckey2", 00:16:52.121 "allow_unrecognized_csi": false, 00:16:52.121 "method": "bdev_nvme_attach_controller", 00:16:52.121 "req_id": 1 00:16:52.121 } 00:16:52.121 Got JSON-RPC error response 00:16:52.121 response: 00:16:52.121 { 00:16:52.121 "code": -5, 00:16:52.121 "message": "Input/output error" 00:16:52.121 } 00:16:52.121 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:52.121 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:52.121 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:52.121 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:52.121 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:52.122 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.122 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.122 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.122 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:16:52.122 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.122 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.122 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.122 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.122 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:52.122 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.122 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:52.122 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:52.122 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:52.122 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:52.122 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.122 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.122 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.380 request: 00:16:52.380 { 00:16:52.380 "name": "nvme0", 00:16:52.380 "trtype": 
"tcp", 00:16:52.380 "traddr": "10.0.0.2", 00:16:52.380 "adrfam": "ipv4", 00:16:52.380 "trsvcid": "4420", 00:16:52.380 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:52.380 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:52.380 "prchk_reftag": false, 00:16:52.380 "prchk_guard": false, 00:16:52.380 "hdgst": false, 00:16:52.380 "ddgst": false, 00:16:52.380 "dhchap_key": "key1", 00:16:52.380 "dhchap_ctrlr_key": "ckey1", 00:16:52.380 "allow_unrecognized_csi": false, 00:16:52.380 "method": "bdev_nvme_attach_controller", 00:16:52.380 "req_id": 1 00:16:52.380 } 00:16:52.380 Got JSON-RPC error response 00:16:52.380 response: 00:16:52.380 { 00:16:52.380 "code": -5, 00:16:52.380 "message": "Input/output error" 00:16:52.380 } 00:16:52.380 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:52.380 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:52.380 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:52.380 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:52.380 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:52.380 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.380 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.380 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.380 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3622978 00:16:52.380 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 3622978 ']' 00:16:52.380 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3622978 00:16:52.380 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:52.380 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:52.380 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3622978 00:16:52.639 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:52.640 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:52.640 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3622978' 00:16:52.640 killing process with pid 3622978 00:16:52.640 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3622978 00:16:52.640 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3622978 00:16:52.640 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:52.640 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:52.640 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:52.640 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.640 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3645001 00:16:52.640 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:52.640 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3645001 00:16:52.640 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3645001 ']' 00:16:52.640 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.640 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:52.640 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.640 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:52.640 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.899 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:52.899 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:52.899 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:52.899 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:52.899 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.899 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.899 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:52.899 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 3645001 00:16:52.899 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3645001 ']' 00:16:52.899 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.899 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:52.899 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.899 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:52.899 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.158 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:53.158 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:53.158 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:16:53.158 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.158 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.158 null0 00:16:53.158 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.158 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:53.158 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.pST 00:16:53.158 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.158 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.158 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.158 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.0j4 ]] 00:16:53.158 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0j4 00:16:53.158 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.158 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.158 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.158 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:53.158 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Mzt 00:16:53.158 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.158 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.158 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.158 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.eHE ]] 00:16:53.158 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eHE 00:16:53.158 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.158 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:53.158 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.158 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:53.158 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.63q 00:16:53.158 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.158 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.158 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.159 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.cfB ]] 00:16:53.159 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.cfB 00:16:53.159 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.159 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.417 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.417 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:53.417 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.rhG 00:16:53.417 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.417 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.417 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:16:53.417 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:16:53.417 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:16:53.417 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.417 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:53.417 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:53.417 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:53.417 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.417 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:53.417 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.417 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.417 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.417 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:53.417 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:53.418 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:53.985 nvme0n1 00:16:53.985 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.985 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.985 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.244 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.244 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.244 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.244 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.244 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.244 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.244 { 00:16:54.244 "cntlid": 1, 00:16:54.244 "qid": 0, 00:16:54.244 "state": "enabled", 00:16:54.244 "thread": "nvmf_tgt_poll_group_000", 00:16:54.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:54.244 "listen_address": { 00:16:54.244 "trtype": "TCP", 00:16:54.244 "adrfam": "IPv4", 00:16:54.244 "traddr": "10.0.0.2", 00:16:54.244 "trsvcid": "4420" 00:16:54.244 }, 00:16:54.244 "peer_address": { 00:16:54.244 "trtype": "TCP", 00:16:54.244 "adrfam": "IPv4", 00:16:54.244 "traddr": 
"10.0.0.1", 00:16:54.244 "trsvcid": "34898" 00:16:54.244 }, 00:16:54.244 "auth": { 00:16:54.244 "state": "completed", 00:16:54.244 "digest": "sha512", 00:16:54.244 "dhgroup": "ffdhe8192" 00:16:54.244 } 00:16:54.244 } 00:16:54.244 ]' 00:16:54.244 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.244 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:54.244 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.244 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:54.244 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.245 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.245 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.245 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.503 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=: 00:16:54.503 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=: 00:16:55.068 18:54:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.068 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:55.068 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.068 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.068 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.068 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:55.068 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.068 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.326 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.326 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:55.326 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:55.326 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:55.326 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:55.326 18:54:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:55.326 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:55.326 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:55.326 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:55.326 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:55.326 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:55.326 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:55.326 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:55.585 request: 00:16:55.585 { 00:16:55.585 "name": "nvme0", 00:16:55.585 "trtype": "tcp", 00:16:55.585 "traddr": "10.0.0.2", 00:16:55.585 "adrfam": "ipv4", 00:16:55.585 "trsvcid": "4420", 00:16:55.585 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:55.585 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:55.585 "prchk_reftag": false, 00:16:55.585 "prchk_guard": false, 00:16:55.585 "hdgst": false, 00:16:55.585 "ddgst": false, 00:16:55.585 "dhchap_key": "key3", 00:16:55.585 
"allow_unrecognized_csi": false, 00:16:55.585 "method": "bdev_nvme_attach_controller", 00:16:55.585 "req_id": 1 00:16:55.585 } 00:16:55.585 Got JSON-RPC error response 00:16:55.585 response: 00:16:55.585 { 00:16:55.585 "code": -5, 00:16:55.585 "message": "Input/output error" 00:16:55.585 } 00:16:55.585 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:55.585 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:55.585 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:55.585 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:55.585 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:16:55.585 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:16:55.585 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:55.585 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:55.844 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:55.844 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:55.844 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:55.844 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:55.844 18:54:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:55.844 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:55.844 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:55.844 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:55.844 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:55.844 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:56.103 request: 00:16:56.103 { 00:16:56.103 "name": "nvme0", 00:16:56.103 "trtype": "tcp", 00:16:56.103 "traddr": "10.0.0.2", 00:16:56.103 "adrfam": "ipv4", 00:16:56.103 "trsvcid": "4420", 00:16:56.103 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:56.103 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:56.103 "prchk_reftag": false, 00:16:56.103 "prchk_guard": false, 00:16:56.103 "hdgst": false, 00:16:56.103 "ddgst": false, 00:16:56.103 "dhchap_key": "key3", 00:16:56.103 "allow_unrecognized_csi": false, 00:16:56.103 "method": "bdev_nvme_attach_controller", 00:16:56.103 "req_id": 1 00:16:56.103 } 00:16:56.103 Got JSON-RPC error response 00:16:56.103 response: 00:16:56.103 { 00:16:56.103 "code": -5, 00:16:56.103 "message": "Input/output error" 00:16:56.103 } 00:16:56.103 
18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:56.103 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:56.103 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:56.103 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:56.103 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:56.103 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:16:56.103 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:56.103 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:56.103 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:56.103 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:56.103 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:56.103 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.103 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.362 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.362 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:56.362 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.362 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.362 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.362 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:56.362 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:56.362 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:56.362 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:56.362 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:56.362 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:56.362 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:56.362 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:56.362 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:56.362 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:56.621 request: 00:16:56.621 { 00:16:56.621 "name": "nvme0", 00:16:56.621 "trtype": "tcp", 00:16:56.621 "traddr": "10.0.0.2", 00:16:56.621 "adrfam": "ipv4", 00:16:56.621 "trsvcid": "4420", 00:16:56.621 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:56.621 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:56.621 "prchk_reftag": false, 00:16:56.621 "prchk_guard": false, 00:16:56.621 "hdgst": false, 00:16:56.621 "ddgst": false, 00:16:56.621 "dhchap_key": "key0", 00:16:56.621 "dhchap_ctrlr_key": "key1", 00:16:56.621 "allow_unrecognized_csi": false, 00:16:56.621 "method": "bdev_nvme_attach_controller", 00:16:56.621 "req_id": 1 00:16:56.621 } 00:16:56.621 Got JSON-RPC error response 00:16:56.621 response: 00:16:56.621 { 00:16:56.621 "code": -5, 00:16:56.621 "message": "Input/output error" 00:16:56.621 } 00:16:56.621 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:56.621 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:56.621 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:56.621 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:56.621 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:16:56.621 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:56.622 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:56.881 nvme0n1 00:16:56.881 18:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:16:56.881 18:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:16:56.881 18:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.139 18:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.139 18:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.139 18:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.139 18:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:16:57.139 18:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.139 18:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:57.139 18:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.139 18:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:57.139 18:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:57.139 18:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:58.076 nvme0n1 00:16:58.076 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:16:58.076 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:16:58.076 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.076 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.076 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:58.076 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.076 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.334 
18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.335 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:16:58.335 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.335 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:16:58.335 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.335 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=: 00:16:58.335 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: --dhchap-ctrl-secret DHHC-1:03:OGE5MmIyMjQ2NjRmMDBhZWM1ZmNhYTEwZjY3NzRmNzUzNGM5NTNkYzI2YjQyYWM4NDhkOTE3ZjFhMDNkMjdiMZmo7Ls=: 00:16:58.901 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:16:58.901 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:16:58.901 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:16:58.901 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:16:58.901 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:16:58.901 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:16:58.901 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:16:58.901 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.901 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.160 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:16:59.160 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:59.160 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:16:59.160 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:59.160 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:59.160 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:59.160 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:59.160 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:59.160 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:59.160 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:59.727 request: 00:16:59.727 { 00:16:59.727 "name": "nvme0", 00:16:59.727 "trtype": "tcp", 00:16:59.727 "traddr": "10.0.0.2", 00:16:59.727 "adrfam": "ipv4", 00:16:59.727 "trsvcid": "4420", 00:16:59.727 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:59.727 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:59.727 "prchk_reftag": false, 00:16:59.727 "prchk_guard": false, 00:16:59.727 "hdgst": false, 00:16:59.727 "ddgst": false, 00:16:59.727 "dhchap_key": "key1", 00:16:59.727 "allow_unrecognized_csi": false, 00:16:59.727 "method": "bdev_nvme_attach_controller", 00:16:59.727 "req_id": 1 00:16:59.727 } 00:16:59.727 Got JSON-RPC error response 00:16:59.727 response: 00:16:59.727 { 00:16:59.727 "code": -5, 00:16:59.728 "message": "Input/output error" 00:16:59.728 } 00:16:59.728 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:59.728 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:59.728 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:59.728 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:59.728 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:59.728 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:59.728 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:00.296 nvme0n1 00:17:00.296 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:00.296 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.296 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:00.554 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.554 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.554 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.812 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:00.812 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.812 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:00.812 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.812 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:00.813 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:00.813 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:01.071 nvme0n1 00:17:01.071 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:01.071 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:01.071 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.330 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.330 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.330 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.330 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:01.330 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.330 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.330 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.330 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: '' 2s 00:17:01.330 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:01.330 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:01.330 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: 00:17:01.330 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:01.330 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:01.330 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:01.330 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: ]] 00:17:01.330 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZGMxY2RmMDE0OWUxZWU5ZTk3MzYwZDgxZDg2OWMxNWFcip/8: 00:17:01.588 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:01.588 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:01.588 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:03.491 
18:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:03.491 18:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:03.491 18:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:03.491 18:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:03.491 18:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:03.491 18:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:03.491 18:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:03.491 18:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:03.491 18:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.491 18:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.491 18:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.491 18:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: 2s 00:17:03.491 18:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:03.491 18:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:03.491 18:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:03.491 18:54:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: 00:17:03.491 18:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:03.491 18:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:03.491 18:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:03.491 18:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: ]] 00:17:03.491 18:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZDVkNzQ2YTMyNzA0Y2Y4ZDgwMTk3ZjZjMGM1ODFkZGFkNDZiOGI3YmFiNTU3NGRha0VaJw==: 00:17:03.491 18:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:03.491 18:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:05.394 18:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:05.394 18:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:05.394 18:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:05.394 18:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:05.654 18:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:05.654 18:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:05.654 18:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:05.654 18:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.654 18:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:05.654 18:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.654 18:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.654 18:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.654 18:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:05.654 18:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:05.654 18:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:06.222 nvme0n1 00:17:06.222 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:17:06.222 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.222 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.222 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.222 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:06.222 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:06.790 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:06.790 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:06.790 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.048 18:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.048 18:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:07.048 18:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.048 18:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.048 18:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.048 18:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:07.048 18:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:07.307 18:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:07.307 18:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.307 18:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:07.307 18:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.307 18:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:07.307 18:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.307 18:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.307 18:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.307 18:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:07.307 18:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:07.307 18:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:07.307 18:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:07.307 18:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:07.307 18:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:07.307 18:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:07.307 18:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:07.307 18:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:07.875 request: 00:17:07.875 { 00:17:07.875 "name": "nvme0", 00:17:07.875 "dhchap_key": "key1", 00:17:07.875 "dhchap_ctrlr_key": "key3", 00:17:07.875 "method": "bdev_nvme_set_keys", 00:17:07.875 "req_id": 1 00:17:07.875 } 00:17:07.875 Got JSON-RPC error response 00:17:07.875 response: 00:17:07.875 { 00:17:07.875 "code": -13, 00:17:07.875 "message": "Permission denied" 00:17:07.875 } 00:17:07.876 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:07.876 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:07.876 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:07.876 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:07.876 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:07.876 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:07.876 18:54:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.135 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:17:08.135 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:09.070 18:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:09.070 18:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:09.070 18:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.329 18:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:09.329 18:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:09.329 18:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.329 18:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.329 18:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.329 18:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:09.329 18:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:09.329 18:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:09.896 nvme0n1 00:17:09.896 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:09.896 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.896 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.896 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.896 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:09.896 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:09.896 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:09.896 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:10.155 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:10.155 18:54:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:10.155 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:10.155 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:10.155 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:10.413 request: 00:17:10.413 { 00:17:10.413 "name": "nvme0", 00:17:10.413 "dhchap_key": "key2", 00:17:10.413 "dhchap_ctrlr_key": "key0", 00:17:10.413 "method": "bdev_nvme_set_keys", 00:17:10.413 "req_id": 1 00:17:10.413 } 00:17:10.413 Got JSON-RPC error response 00:17:10.413 response: 00:17:10.413 { 00:17:10.413 "code": -13, 00:17:10.413 "message": "Permission denied" 00:17:10.413 } 00:17:10.413 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:10.413 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:10.413 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:10.413 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:10.413 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:10.413 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:10.413 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.672 18:54:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:10.672 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:11.607 18:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:11.607 18:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:11.607 18:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.866 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:11.866 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:11.866 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:11.866 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3623004 00:17:11.866 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3623004 ']' 00:17:11.866 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3623004 00:17:11.866 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:11.866 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:11.866 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3623004 00:17:11.866 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:11.866 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:11.866 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 3623004' 00:17:11.866 killing process with pid 3623004 00:17:11.866 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3623004 00:17:11.866 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3623004 00:17:12.125 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:12.125 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:12.125 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:12.125 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:12.125 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:12.125 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:12.125 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:12.384 rmmod nvme_tcp 00:17:12.384 rmmod nvme_fabrics 00:17:12.384 rmmod nvme_keyring 00:17:12.384 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:12.384 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:12.384 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:12.384 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3645001 ']' 00:17:12.384 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3645001 00:17:12.384 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3645001 ']' 00:17:12.384 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3645001 
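The `bdev_nvme_set_keys` failures recorded above return a standard JSON-RPC error envelope (`"code": -13`, `"message": "Permission denied"`) when the host requests a key pair the subsystem was not configured to allow. As an illustrative sketch only (this helper is hypothetical and not part of the SPDK test suite), the error body captured from `rpc.py` output can be parsed like this:

```python
import json

# Error envelope as recorded in the log for an invalid key rotation
# (host asks for key1/key3 while the subsystem only allows key2/key3).
response_text = """
{
  "code": -13,
  "message": "Permission denied"
}
"""

def classify_rpc_error(text: str) -> str:
    """Map a JSON-RPC error body to a short label (hypothetical helper)."""
    err = json.loads(text)
    if err.get("code") == -13:
        return "auth-key-mismatch"
    return "unknown ({})".format(err.get("code"))

print(classify_rpc_error(response_text))  # auth-key-mismatch
```

The test script relies on exactly this behavior: the `NOT hostrpc ...` wrapper expects the RPC to fail, and the subsequent `(( es > 128 ))` / `(( !es == 0 ))` checks confirm a clean non-zero exit rather than a crash.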
00:17:12.384 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:12.384 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:12.384 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3645001 00:17:12.384 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:12.384 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:12.384 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3645001' 00:17:12.384 killing process with pid 3645001 00:17:12.384 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3645001 00:17:12.384 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3645001 00:17:12.384 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:12.384 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:12.384 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:12.644 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:12.644 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:12.644 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:12.644 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:12.644 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:12.644 18:54:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:12.644 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.644 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:12.644 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.550 18:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:14.550 18:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.pST /tmp/spdk.key-sha256.Mzt /tmp/spdk.key-sha384.63q /tmp/spdk.key-sha512.rhG /tmp/spdk.key-sha512.0j4 /tmp/spdk.key-sha384.eHE /tmp/spdk.key-sha256.cfB '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:14.550 00:17:14.550 real 2m31.817s 00:17:14.550 user 5m49.281s 00:17:14.550 sys 0m24.635s 00:17:14.550 18:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:14.550 18:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.550 ************************************ 00:17:14.550 END TEST nvmf_auth_target 00:17:14.550 ************************************ 00:17:14.550 18:54:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:14.550 18:54:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:14.550 18:54:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:14.550 18:54:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:17:14.550 18:54:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:14.550 ************************************ 00:17:14.550 START TEST nvmf_bdevio_no_huge 00:17:14.550 ************************************ 00:17:14.550 18:54:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:14.810 * Looking for test storage... 00:17:14.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:14.810 18:54:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:14.810 18:54:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:17:14.810 18:54:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:14.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.810 --rc genhtml_branch_coverage=1 00:17:14.810 --rc genhtml_function_coverage=1 00:17:14.810 --rc genhtml_legend=1 00:17:14.810 --rc geninfo_all_blocks=1 00:17:14.810 --rc geninfo_unexecuted_blocks=1 00:17:14.810 00:17:14.810 ' 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:14.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.810 --rc genhtml_branch_coverage=1 00:17:14.810 --rc genhtml_function_coverage=1 00:17:14.810 --rc genhtml_legend=1 00:17:14.810 --rc geninfo_all_blocks=1 00:17:14.810 --rc geninfo_unexecuted_blocks=1 00:17:14.810 00:17:14.810 ' 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:14.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.810 --rc genhtml_branch_coverage=1 00:17:14.810 --rc genhtml_function_coverage=1 00:17:14.810 --rc genhtml_legend=1 00:17:14.810 --rc geninfo_all_blocks=1 00:17:14.810 --rc geninfo_unexecuted_blocks=1 00:17:14.810 00:17:14.810 ' 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:14.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.810 --rc genhtml_branch_coverage=1 
00:17:14.810 --rc genhtml_function_coverage=1 00:17:14.810 --rc genhtml_legend=1 00:17:14.810 --rc geninfo_all_blocks=1 00:17:14.810 --rc geninfo_unexecuted_blocks=1 00:17:14.810 00:17:14.810 ' 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:17:14.810 18:54:37 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:14.810 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:14.811 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:14.811 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:14.811 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:14.811 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:14.811 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:14.811 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:14.811 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.811 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.811 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.811 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:14.811 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.811 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:14.811 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:14.811 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:14.811 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:14.811 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:14.811 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:14.811 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:14.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:14.811 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:14.811 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:14.811 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:14.811 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
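The `scripts/common.sh` trace above (`lt 1.15 2` via `cmp_versions`, splitting on `IFS=.-:` and comparing component by component) decides whether the detected `lcov` version predates 2.x. A minimal Python re-implementation of that comparison logic, written here purely as an illustration of what the shell trace is doing (missing components are padded with 0, matching the `ver1[v]`/`ver2[v]` defaults):

```python
import re

def cmp_versions(v1: str, op: str, v2: str) -> bool:
    """Component-wise dotted-version comparison, mirroring the
    cmp_versions shell trace above; illustrative sketch only."""
    a = [int(x) for x in re.split(r"[.:-]", v1)]
    b = [int(x) for x in re.split(r"[.:-]", v2)]
    # Pad the shorter list with zeros, as the shell loop does implicitly.
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    if op == "<":
        return a < b
    if op == ">":
        return a > b
    return a == b

print(cmp_versions("1.15", "<", "2"))  # True, matching 'lt 1.15 2' in the log
```

Because the comparison is numeric per component, `1.15 < 2` holds even though the string `"1.15"` sorts after `"2"` lexicographically, which is why the script takes the pre-2.x `LCOV_OPTS` branch seen above.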
00:17:14.811 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:14.811 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:14.811 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:14.811 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:14.811 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:14.811 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:14.811 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:14.811 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.811 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:14.811 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.811 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:14.811 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:14.811 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:17:14.811 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:21.459 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:21.459 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 
0x159b)' 00:17:21.460 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:21.460 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:21.460 Found net devices under 0000:86:00.0: cvl_0_0 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.460 
18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:21.460 Found net devices under 0000:86:00.1: cvl_0_1 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:21.460 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:21.461 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:21.461 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:21.461 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:21.461 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:21.461 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:21.461 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:21.461 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:17:21.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:21.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.409 ms 00:17:21.461 00:17:21.461 --- 10.0.0.2 ping statistics --- 00:17:21.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.461 rtt min/avg/max/mdev = 0.409/0.409/0.409/0.000 ms 00:17:21.461 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:21.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:21.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:17:21.461 00:17:21.461 --- 10.0.0.1 ping statistics --- 00:17:21.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.461 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:17:21.461 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:21.461 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:17:21.461 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:21.461 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:21.461 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:21.461 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:21.461 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:21.461 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:21.461 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:21.461 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:17:21.461 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:21.461 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:21.461 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:21.461 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3651894 00:17:21.461 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3651894 00:17:21.461 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:21.461 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 3651894 ']' 00:17:21.461 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.461 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:21.461 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.461 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:21.461 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:21.461 [2024-11-20 18:54:43.145257] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:17:21.461 [2024-11-20 18:54:43.145303] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:21.461 [2024-11-20 18:54:43.231534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:21.461 [2024-11-20 18:54:43.278019] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:21.461 [2024-11-20 18:54:43.278054] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:21.461 [2024-11-20 18:54:43.278061] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:21.461 [2024-11-20 18:54:43.278066] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:21.461 [2024-11-20 18:54:43.278072] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:21.461 [2024-11-20 18:54:43.279155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:21.461 [2024-11-20 18:54:43.279266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:21.461 [2024-11-20 18:54:43.279373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:21.461 [2024-11-20 18:54:43.279373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:21.719 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:21.719 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:17:21.719 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:21.719 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:21.719 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:21.719 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:21.719 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:21.719 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.719 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:21.719 [2024-11-20 18:54:44.036403] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:21.977 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.977 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:21.977 18:54:44 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.977 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:21.977 Malloc0 00:17:21.977 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.977 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:21.977 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.977 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:21.977 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.977 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:21.977 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.977 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:21.977 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.977 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:21.977 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.977 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:21.977 [2024-11-20 18:54:44.080680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:21.977 18:54:44 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.977 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:21.977 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:21.977 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:21.977 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:21.977 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:21.977 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:21.977 { 00:17:21.977 "params": { 00:17:21.977 "name": "Nvme$subsystem", 00:17:21.977 "trtype": "$TEST_TRANSPORT", 00:17:21.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:21.977 "adrfam": "ipv4", 00:17:21.977 "trsvcid": "$NVMF_PORT", 00:17:21.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:21.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:21.977 "hdgst": ${hdgst:-false}, 00:17:21.977 "ddgst": ${ddgst:-false} 00:17:21.977 }, 00:17:21.977 "method": "bdev_nvme_attach_controller" 00:17:21.977 } 00:17:21.977 EOF 00:17:21.977 )") 00:17:21.977 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:21.977 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:17:21.977 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:21.977 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:21.977 "params": { 00:17:21.977 "name": "Nvme1", 00:17:21.977 "trtype": "tcp", 00:17:21.977 "traddr": "10.0.0.2", 00:17:21.977 "adrfam": "ipv4", 00:17:21.977 "trsvcid": "4420", 00:17:21.977 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.977 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:21.977 "hdgst": false, 00:17:21.977 "ddgst": false 00:17:21.977 }, 00:17:21.977 "method": "bdev_nvme_attach_controller" 00:17:21.977 }' 00:17:21.977 [2024-11-20 18:54:44.132368] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:17:21.977 [2024-11-20 18:54:44.132417] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3652134 ] 00:17:21.977 [2024-11-20 18:54:44.211928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:21.977 [2024-11-20 18:54:44.260339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:21.977 [2024-11-20 18:54:44.260444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.977 [2024-11-20 18:54:44.260445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:22.543 I/O targets: 00:17:22.543 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:22.543 00:17:22.543 00:17:22.543 CUnit - A unit testing framework for C - Version 2.1-3 00:17:22.543 http://cunit.sourceforge.net/ 00:17:22.543 00:17:22.543 00:17:22.543 Suite: bdevio tests on: Nvme1n1 00:17:22.543 Test: blockdev write read block ...passed 00:17:22.543 Test: blockdev write zeroes read block ...passed 00:17:22.543 Test: blockdev write zeroes read no split ...passed 00:17:22.543 Test: blockdev write zeroes 
read split ...passed 00:17:22.543 Test: blockdev write zeroes read split partial ...passed 00:17:22.543 Test: blockdev reset ...[2024-11-20 18:54:44.756805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:22.543 [2024-11-20 18:54:44.756869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108b920 (9): Bad file descriptor 00:17:22.543 [2024-11-20 18:54:44.819110] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:17:22.543 passed 00:17:22.543 Test: blockdev write read 8 blocks ...passed 00:17:22.543 Test: blockdev write read size > 128k ...passed 00:17:22.543 Test: blockdev write read invalid size ...passed 00:17:22.543 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:22.543 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:22.543 Test: blockdev write read max offset ...passed 00:17:22.801 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:22.801 Test: blockdev writev readv 8 blocks ...passed 00:17:22.801 Test: blockdev writev readv 30 x 1block ...passed 00:17:22.801 Test: blockdev writev readv block ...passed 00:17:22.801 Test: blockdev writev readv size > 128k ...passed 00:17:22.801 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:22.801 Test: blockdev comparev and writev ...[2024-11-20 18:54:45.070018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:22.801 [2024-11-20 18:54:45.070047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:22.801 [2024-11-20 18:54:45.070061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:22.801 [2024-11-20 
18:54:45.070069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:22.801 [2024-11-20 18:54:45.070330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:22.801 [2024-11-20 18:54:45.070342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:22.801 [2024-11-20 18:54:45.070353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:22.801 [2024-11-20 18:54:45.070360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:22.801 [2024-11-20 18:54:45.070606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:22.801 [2024-11-20 18:54:45.070617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:22.801 [2024-11-20 18:54:45.070628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:22.801 [2024-11-20 18:54:45.070635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:22.801 [2024-11-20 18:54:45.070878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:22.801 [2024-11-20 18:54:45.070888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:22.801 [2024-11-20 18:54:45.070901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:17:22.801 [2024-11-20 18:54:45.070908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:22.801 passed 00:17:23.060 Test: blockdev nvme passthru rw ...passed 00:17:23.060 Test: blockdev nvme passthru vendor specific ...[2024-11-20 18:54:45.152564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:23.060 [2024-11-20 18:54:45.152588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:23.060 [2024-11-20 18:54:45.152703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:23.060 [2024-11-20 18:54:45.152714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:23.060 [2024-11-20 18:54:45.152820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:23.060 [2024-11-20 18:54:45.152829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:23.060 [2024-11-20 18:54:45.152932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:23.060 [2024-11-20 18:54:45.152941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:23.060 passed 00:17:23.060 Test: blockdev nvme admin passthru ...passed 00:17:23.060 Test: blockdev copy ...passed 00:17:23.060 00:17:23.060 Run Summary: Type Total Ran Passed Failed Inactive 00:17:23.060 suites 1 1 n/a 0 0 00:17:23.060 tests 23 23 23 0 0 00:17:23.060 asserts 152 152 152 0 n/a 00:17:23.060 00:17:23.060 Elapsed time = 1.248 seconds 
00:17:23.318 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:23.318 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.318 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:23.318 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.318 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:23.318 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:23.318 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:23.318 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:23.318 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:23.318 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:23.318 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:23.318 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:23.318 rmmod nvme_tcp 00:17:23.318 rmmod nvme_fabrics 00:17:23.318 rmmod nvme_keyring 00:17:23.318 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:23.318 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:17:23.319 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:23.319 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3651894 ']' 00:17:23.319 18:54:45 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3651894 00:17:23.319 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 3651894 ']' 00:17:23.319 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 3651894 00:17:23.319 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:17:23.319 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:23.319 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3651894 00:17:23.319 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:17:23.319 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:17:23.319 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3651894' 00:17:23.319 killing process with pid 3651894 00:17:23.319 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 3651894 00:17:23.319 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 3651894 00:17:23.884 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:23.884 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:23.884 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:23.884 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:23.884 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:23.884 18:54:45 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:23.884 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:23.884 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:23.884 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:23.884 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.884 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:23.884 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.786 18:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:25.786 00:17:25.786 real 0m11.135s 00:17:25.786 user 0m14.735s 00:17:25.786 sys 0m5.503s 00:17:25.786 18:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:25.786 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:25.786 ************************************ 00:17:25.786 END TEST nvmf_bdevio_no_huge 00:17:25.786 ************************************ 00:17:25.786 18:54:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:25.786 18:54:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:25.786 18:54:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:25.786 18:54:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:25.786 
************************************ 00:17:25.786 START TEST nvmf_tls 00:17:25.786 ************************************ 00:17:25.786 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:26.045 * Looking for test storage... 00:17:26.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:26.045 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:26.045 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:17:26.045 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:26.045 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:26.045 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:26.045 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:26.045 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:26.045 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:26.045 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:26.045 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:26.045 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:26.045 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:26.045 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:26.045 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:26.045 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:17:26.045 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:26.045 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:26.045 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:26.045 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:26.045 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:26.045 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:26.045 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:26.045 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:26.045 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:26.045 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:26.045 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:26.045 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:26.045 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:26.045 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:26.045 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:26.045 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:26.045 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:26.045 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:26.045 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:26.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.045 --rc genhtml_branch_coverage=1 00:17:26.045 --rc genhtml_function_coverage=1 00:17:26.045 --rc genhtml_legend=1 00:17:26.045 --rc geninfo_all_blocks=1 00:17:26.045 --rc geninfo_unexecuted_blocks=1 00:17:26.045 00:17:26.045 ' 00:17:26.045 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:26.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.045 --rc genhtml_branch_coverage=1 00:17:26.046 --rc genhtml_function_coverage=1 00:17:26.046 --rc genhtml_legend=1 00:17:26.046 --rc geninfo_all_blocks=1 00:17:26.046 --rc geninfo_unexecuted_blocks=1 00:17:26.046 00:17:26.046 ' 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:26.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.046 --rc genhtml_branch_coverage=1 00:17:26.046 --rc genhtml_function_coverage=1 00:17:26.046 --rc genhtml_legend=1 00:17:26.046 --rc geninfo_all_blocks=1 00:17:26.046 --rc geninfo_unexecuted_blocks=1 00:17:26.046 00:17:26.046 ' 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:26.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.046 --rc genhtml_branch_coverage=1 00:17:26.046 --rc genhtml_function_coverage=1 00:17:26.046 --rc genhtml_legend=1 00:17:26.046 --rc geninfo_all_blocks=1 00:17:26.046 --rc geninfo_unexecuted_blocks=1 00:17:26.046 00:17:26.046 ' 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:26.046 
18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:26.046 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:17:26.046 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:32.615 18:54:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:32.615 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:32.615 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:32.615 18:54:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:32.615 Found net devices under 0000:86:00.0: cvl_0_0 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:32.615 Found net devices under 0000:86:00.1: cvl_0_1 00:17:32.615 18:54:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:32.615 
18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:32.615 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:32.615 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:32.615 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:32.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:32.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.432 ms 00:17:32.616 00:17:32.616 --- 10.0.0.2 ping statistics --- 00:17:32.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.616 rtt min/avg/max/mdev = 0.432/0.432/0.432/0.000 ms 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:32.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
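The `nvmf_tcp_init` sequence above builds the point-to-point test topology: the target NIC (`cvl_0_0`) is moved into a private network namespace, each side gets a 10.0.0.x/24 address, an iptables ACCEPT rule tagged with an `SPDK_NVMF:` comment (so `iptr` can strip it at cleanup) opens port 4420, and both directions are ping-tested. A sketch of that sequence as one function — interface names, addresses, and the rule comment are copied from the log, the rest is assumed; it needs root and the real NICs to actually run:

```shell
# Recreate the namespace topology from the log (requires root and the NICs).
nvmf_tcp_init_sketch() {
    local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk

    ip netns add "$ns"
    ip link set "$target_if" netns "$ns"        # target NIC now lives in the netns

    ip addr add 10.0.0.1/24 dev "$initiator_if" # initiator side stays in the host
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"

    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up

    # Tag the rule so cleanup can find and remove it (grep -v SPDK_NVMF in iptr).
    iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Sanity-check connectivity in both directions, as the log does.
    ping -c 1 10.0.0.2
    ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

Keeping the target in its own namespace is what lets the initiator and target share one host while still exercising a real TCP path over the physical NICs.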
00:17:32.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:17:32.616 00:17:32.616 --- 10.0.0.1 ping statistics --- 00:17:32.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.616 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3655905 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3655905 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3655905 ']' 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:32.616 [2024-11-20 18:54:54.331965] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:17:32.616 [2024-11-20 18:54:54.332008] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:32.616 [2024-11-20 18:54:54.394384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.616 [2024-11-20 18:54:54.438329] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:32.616 [2024-11-20 18:54:54.438364] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:32.616 [2024-11-20 18:54:54.438373] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:32.616 [2024-11-20 18:54:54.438380] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:32.616 [2024-11-20 18:54:54.438386] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:32.616 [2024-11-20 18:54:54.438942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:32.616 true 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:17:32.616 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:17:32.616 
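`nvmfappstart` then launches `nvmf_tgt` inside the namespace with `--wait-for-rpc` (so the ssl socket options can be set before the transport comes up) and `waitforlisten` polls until the RPC socket answers, printing the "Waiting for process to start up and listen on UNIX domain socket..." line seen above. A hedged sketch of that wait loop; the `rpc.py` path and the exact retry policy are assumptions, not the upstream code:

```shell
# Poll the SPDK RPC socket until the target answers (or retries run out).
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1    # the app died before listening
        if ./scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
            return 0                               # RPC server is up
        fi
        sleep 0.5
    done
    return 1
}
```

Checking `kill -0` on every iteration matters: without it, a target that crashes during startup would make the test hang for the full retry budget instead of failing fast.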
18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:32.875 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:32.875 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:17:33.133 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:17:33.133 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:17:33.133 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:33.392 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:33.392 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:17:33.392 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:17:33.392 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:17:33.392 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:33.392 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:17:33.651 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:17:33.651 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:17:33.651 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:17:33.910 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:33.910 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:33.910 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:17:33.910 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:17:33.910 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:34.169 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:34.169 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:34.428 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:17:34.428 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:17:34.428 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:34.428 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:34.428 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:34.428 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:34.428 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:17:34.428 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:34.428 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:34.428 18:54:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:34.428 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:34.428 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:34.428 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:34.428 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:34.428 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:17:34.429 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:34.429 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:34.429 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:34.429 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:34.429 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.cAAKj9WAS0 00:17:34.429 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:17:34.429 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.gS2czxIAT8 00:17:34.429 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:34.429 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:34.429 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.cAAKj9WAS0 00:17:34.429 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.gS2czxIAT8 00:17:34.429 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:34.688 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:34.947 18:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.cAAKj9WAS0 00:17:34.947 18:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.cAAKj9WAS0 00:17:34.947 18:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:34.947 [2024-11-20 18:54:57.265788] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:35.206 18:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:35.206 18:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:35.465 [2024-11-20 18:54:57.610652] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:35.465 [2024-11-20 18:54:57.610879] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:35.465 18:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:35.724 malloc0 00:17:35.724 18:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:35.724 18:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.cAAKj9WAS0 00:17:35.983 18:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:36.243 18:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.cAAKj9WAS0 00:17:46.221 Initializing NVMe Controllers 00:17:46.221 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:46.221 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:46.221 Initialization complete. Launching workers. 
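The `format_interchange_psk` / `format_key` steps above wrap a configured hex key into the NVMe TLS PSK interchange format via an embedded python snippet. A minimal stand-alone sketch of that transform (assumption: the base64 payload is the ASCII key followed by its little-endian CRC-32, as `nvmf/common.sh`'s `format_key` computes it):

```shell
# Sketch of the PSK interchange format generated above (assumption: payload is
# the ASCII key plus its little-endian CRC-32, base64-encoded, mirroring
# nvmf/common.sh's format_key helper).
hexkey=00112233445566778899aabbccddeeff
psk=$(python3 - "$hexkey" <<'EOF'
import base64, struct, sys, zlib
key = sys.argv[1].encode()                              # ASCII form of the hex key
crc = struct.pack("<I", zlib.crc32(key) & 0xffffffff)   # little-endian CRC-32
print("NVMeTLSkey-1:01:" + base64.b64encode(key + crc).decode() + ":")
EOF
)
echo "$psk"
```

For the key above this reproduces the `NVMeTLSkey-1:01:MDAx...` string the log echoes into the mktemp'd key file before `chmod 0600`.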
00:17:46.221 ======================================================== 00:17:46.221 Latency(us) 00:17:46.221 Device Information : IOPS MiB/s Average min max 00:17:46.221 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16761.37 65.47 3818.40 813.52 6142.09 00:17:46.221 ======================================================== 00:17:46.221 Total : 16761.37 65.47 3818.40 813.52 6142.09 00:17:46.221 00:17:46.221 18:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cAAKj9WAS0 00:17:46.221 18:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:46.221 18:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:46.221 18:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:46.221 18:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.cAAKj9WAS0 00:17:46.221 18:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:46.221 18:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3658257 00:17:46.221 18:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:46.221 18:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:46.221 18:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3658257 /var/tmp/bdevperf.sock 00:17:46.221 18:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3658257 ']' 00:17:46.221 18:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
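The target-side configuration driven through `rpc.py` in the log above condenses to one RPC sequence. In this dry-run sketch `rpc()` merely echoes each call rather than contacting a target, and `/tmp/psk.key` is a placeholder for the mktemp'd key file from the run; substitute `scripts/rpc.py` to drive a live `nvmf_tgt`:

```shell
# Dry-run sketch of the TLS target setup sequence from the run above.
# rpc() only echoes here; replace it with scripts/rpc.py for a real target.
# /tmp/psk.key stands in for the mktemp'd interchange-format key file.
rpc() { echo "rpc.py $*"; }

rpc sock_set_default_impl -i ssl                  # select the ssl sock implementation
rpc sock_impl_set_options -i ssl --tls-version 13 # pin TLS 1.3
rpc framework_start_init                          # leave --wait-for-rpc state
rpc nvmf_create_transport -t tcp -o
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc bdev_malloc_create 32 4096 -b malloc0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc keyring_file_add_key key0 /tmp/psk.key
rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
```

Only after the `keyring_file_add_key`/`nvmf_subsystem_add_host --psk` pair does the `-k` (TLS) listener accept `host1` with that PSK, which is what the `spdk_nvme_perf --psk-path` run above exercises.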
00:17:46.221 18:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:46.221 18:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:46.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:46.221 18:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:46.221 18:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:46.221 [2024-11-20 18:55:08.533080] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:17:46.221 [2024-11-20 18:55:08.533125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3658257 ] 00:17:46.480 [2024-11-20 18:55:08.608148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.480 [2024-11-20 18:55:08.649253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:46.480 18:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:46.480 18:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:46.480 18:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cAAKj9WAS0 00:17:46.738 18:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:17:46.996 [2024-11-20 18:55:09.101725] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:46.996 TLSTESTn1 00:17:46.996 18:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:46.996 Running I/O for 10 seconds... 00:17:49.303 5338.00 IOPS, 20.85 MiB/s [2024-11-20T17:55:12.563Z] 5402.50 IOPS, 21.10 MiB/s [2024-11-20T17:55:13.500Z] 5292.33 IOPS, 20.67 MiB/s [2024-11-20T17:55:14.436Z] 5171.00 IOPS, 20.20 MiB/s [2024-11-20T17:55:15.372Z] 5150.80 IOPS, 20.12 MiB/s [2024-11-20T17:55:16.749Z] 5120.50 IOPS, 20.00 MiB/s [2024-11-20T17:55:17.318Z] 5086.43 IOPS, 19.87 MiB/s [2024-11-20T17:55:18.695Z] 5040.75 IOPS, 19.69 MiB/s [2024-11-20T17:55:19.633Z] 5014.11 IOPS, 19.59 MiB/s [2024-11-20T17:55:19.633Z] 5018.70 IOPS, 19.60 MiB/s 00:17:57.308 Latency(us) 00:17:57.308 [2024-11-20T17:55:19.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.308 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:57.308 Verification LBA range: start 0x0 length 0x2000 00:17:57.308 TLSTESTn1 : 10.02 5022.99 19.62 0.00 0.00 25446.84 4681.14 35202.19 00:17:57.308 [2024-11-20T17:55:19.633Z] =================================================================================================================== 00:17:57.308 [2024-11-20T17:55:19.633Z] Total : 5022.99 19.62 0.00 0.00 25446.84 4681.14 35202.19 00:17:57.308 { 00:17:57.308 "results": [ 00:17:57.308 { 00:17:57.308 "job": "TLSTESTn1", 00:17:57.308 "core_mask": "0x4", 00:17:57.308 "workload": "verify", 00:17:57.308 "status": "finished", 00:17:57.308 "verify_range": { 00:17:57.308 "start": 0, 00:17:57.308 "length": 8192 00:17:57.308 }, 00:17:57.308 "queue_depth": 128, 00:17:57.308 "io_size": 4096, 00:17:57.308 "runtime": 10.016748, 00:17:57.308 "iops": 
5022.9875005341055, 00:17:57.308 "mibps": 19.62104492396135, 00:17:57.308 "io_failed": 0, 00:17:57.308 "io_timeout": 0, 00:17:57.308 "avg_latency_us": 25446.83578880819, 00:17:57.308 "min_latency_us": 4681.142857142857, 00:17:57.308 "max_latency_us": 35202.194285714286 00:17:57.308 } 00:17:57.308 ], 00:17:57.308 "core_count": 1 00:17:57.308 } 00:17:57.308 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:57.308 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3658257 00:17:57.308 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3658257 ']' 00:17:57.308 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3658257 00:17:57.308 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:57.308 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:57.308 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3658257 00:17:57.308 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:57.308 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:57.308 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3658257' 00:17:57.308 killing process with pid 3658257 00:17:57.308 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3658257 00:17:57.308 Received shutdown signal, test time was about 10.000000 seconds 00:17:57.308 00:17:57.308 Latency(us) 00:17:57.308 [2024-11-20T17:55:19.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.308 [2024-11-20T17:55:19.633Z] 
=================================================================================================================== 00:17:57.308 [2024-11-20T17:55:19.633Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:57.308 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3658257 00:17:57.308 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gS2czxIAT8 00:17:57.308 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:57.308 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gS2czxIAT8 00:17:57.308 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:57.309 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.309 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:57.309 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.309 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gS2czxIAT8 00:17:57.309 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:57.309 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:57.309 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:57.309 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.gS2czxIAT8 00:17:57.309 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:57.309 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3660085 00:17:57.309 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:57.309 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:57.309 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3660085 /var/tmp/bdevperf.sock 00:17:57.309 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3660085 ']' 00:17:57.309 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:57.309 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:57.309 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:57.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:57.309 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:57.309 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:57.309 [2024-11-20 18:55:19.616835] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:17:57.309 [2024-11-20 18:55:19.616880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3660085 ] 00:17:57.567 [2024-11-20 18:55:19.681565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.568 [2024-11-20 18:55:19.721427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.568 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:57.568 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:57.568 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gS2czxIAT8 00:17:57.826 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:58.085 [2024-11-20 18:55:20.181387] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:58.085 [2024-11-20 18:55:20.186263] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:58.085 [2024-11-20 18:55:20.186701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x839170 (107): Transport endpoint is not connected 00:17:58.085 [2024-11-20 18:55:20.187693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x839170 (9): Bad file descriptor 00:17:58.085 [2024-11-20 
18:55:20.188694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:17:58.085 [2024-11-20 18:55:20.188708] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:58.085 [2024-11-20 18:55:20.188716] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:17:58.085 [2024-11-20 18:55:20.188727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:17:58.085 request: 00:17:58.085 { 00:17:58.085 "name": "TLSTEST", 00:17:58.085 "trtype": "tcp", 00:17:58.085 "traddr": "10.0.0.2", 00:17:58.085 "adrfam": "ipv4", 00:17:58.085 "trsvcid": "4420", 00:17:58.085 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:58.085 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:58.085 "prchk_reftag": false, 00:17:58.086 "prchk_guard": false, 00:17:58.086 "hdgst": false, 00:17:58.086 "ddgst": false, 00:17:58.086 "psk": "key0", 00:17:58.086 "allow_unrecognized_csi": false, 00:17:58.086 "method": "bdev_nvme_attach_controller", 00:17:58.086 "req_id": 1 00:17:58.086 } 00:17:58.086 Got JSON-RPC error response 00:17:58.086 response: 00:17:58.086 { 00:17:58.086 "code": -5, 00:17:58.086 "message": "Input/output error" 00:17:58.086 } 00:17:58.086 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3660085 00:17:58.086 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3660085 ']' 00:17:58.086 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3660085 00:17:58.086 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:58.086 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:58.086 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3660085 00:17:58.086 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:58.086 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:58.086 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3660085' 00:17:58.086 killing process with pid 3660085 00:17:58.086 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3660085 00:17:58.086 Received shutdown signal, test time was about 10.000000 seconds 00:17:58.086 00:17:58.086 Latency(us) 00:17:58.086 [2024-11-20T17:55:20.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.086 [2024-11-20T17:55:20.411Z] =================================================================================================================== 00:17:58.086 [2024-11-20T17:55:20.411Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:58.086 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3660085 00:17:58.345 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:58.345 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:58.345 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:58.345 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:58.345 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:58.345 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.cAAKj9WAS0 00:17:58.345 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
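The negative cases (wrong key file, wrong host NQN) both surface as JSON-RPC code -5, "Input/output error", from `bdev_nvme_attach_controller`, which the `NOT`/`return 1` wrapper converts into a pass. A minimal sketch of checking such a response; the response text here is copied from this log, not fetched from a live target:

```shell
# Sketch: extract the error code from the JSON-RPC error object that the
# failed TLS attach returns above.  resp is copied from this log, not live.
resp='{"code": -5, "message": "Input/output error"}'
code=$(printf '%s' "$resp" | python3 -c 'import json,sys; print(json.load(sys.stdin)["code"])')
if [ "$code" = "-5" ]; then
    echo "attach failed as expected"
fi
```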
00:17:58.345 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.cAAKj9WAS0 00:17:58.345 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:58.345 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.345 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:58.345 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.345 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.cAAKj9WAS0 00:17:58.345 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:58.345 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:58.345 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:58.345 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.cAAKj9WAS0 00:17:58.345 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:58.345 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3660128 00:17:58.345 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:58.346 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:58.346 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3660128 
/var/tmp/bdevperf.sock 00:17:58.346 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3660128 ']' 00:17:58.346 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:58.346 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.346 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:58.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:58.346 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.346 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:58.346 [2024-11-20 18:55:20.472463] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:17:58.346 [2024-11-20 18:55:20.472515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3660128 ] 00:17:58.346 [2024-11-20 18:55:20.546652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.346 [2024-11-20 18:55:20.584784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:58.604 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:58.604 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:58.604 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cAAKj9WAS0 00:17:58.604 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:17:58.863 [2024-11-20 18:55:21.055840] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:58.863 [2024-11-20 18:55:21.060528] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:58.863 [2024-11-20 18:55:21.060551] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:58.863 [2024-11-20 18:55:21.060574] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:17:58.863 [2024-11-20 18:55:21.061265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200e170 (107): Transport endpoint is not connected 00:17:58.863 [2024-11-20 18:55:21.062258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200e170 (9): Bad file descriptor 00:17:58.863 [2024-11-20 18:55:21.063258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:17:58.863 [2024-11-20 18:55:21.063269] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:58.863 [2024-11-20 18:55:21.063276] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:17:58.863 [2024-11-20 18:55:21.063286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:17:58.863 request: 00:17:58.863 { 00:17:58.863 "name": "TLSTEST", 00:17:58.863 "trtype": "tcp", 00:17:58.863 "traddr": "10.0.0.2", 00:17:58.863 "adrfam": "ipv4", 00:17:58.863 "trsvcid": "4420", 00:17:58.863 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:58.863 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:58.863 "prchk_reftag": false, 00:17:58.863 "prchk_guard": false, 00:17:58.863 "hdgst": false, 00:17:58.863 "ddgst": false, 00:17:58.864 "psk": "key0", 00:17:58.864 "allow_unrecognized_csi": false, 00:17:58.864 "method": "bdev_nvme_attach_controller", 00:17:58.864 "req_id": 1 00:17:58.864 } 00:17:58.864 Got JSON-RPC error response 00:17:58.864 response: 00:17:58.864 { 00:17:58.864 "code": -5, 00:17:58.864 "message": "Input/output error" 00:17:58.864 } 00:17:58.864 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3660128 00:17:58.864 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3660128 ']' 00:17:58.864 18:55:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3660128 00:17:58.864 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:58.864 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:58.864 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3660128 00:17:58.864 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:58.864 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:58.864 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3660128' 00:17:58.864 killing process with pid 3660128 00:17:58.864 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3660128 00:17:58.864 Received shutdown signal, test time was about 10.000000 seconds 00:17:58.864 00:17:58.864 Latency(us) 00:17:58.864 [2024-11-20T17:55:21.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.864 [2024-11-20T17:55:21.189Z] =================================================================================================================== 00:17:58.864 [2024-11-20T17:55:21.189Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:58.864 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3660128 00:17:59.123 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:59.123 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:59.123 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:59.123 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:59.123 18:55:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:59.123 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.cAAKj9WAS0 00:17:59.123 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:59.123 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.cAAKj9WAS0 00:17:59.123 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:59.123 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:59.123 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:59.123 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:59.123 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.cAAKj9WAS0 00:17:59.123 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:59.123 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:59.123 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:59.123 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.cAAKj9WAS0 00:17:59.123 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:59.123 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3660341 00:17:59.123 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:59.123 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:59.123 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3660341 /var/tmp/bdevperf.sock 00:17:59.123 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3660341 ']' 00:17:59.123 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:59.123 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:59.123 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:59.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:59.123 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:59.123 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:59.123 [2024-11-20 18:55:21.342285] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:17:59.123 [2024-11-20 18:55:21.342339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3660341 ] 00:17:59.123 [2024-11-20 18:55:21.406453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.123 [2024-11-20 18:55:21.442502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:59.382 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.382 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:59.382 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cAAKj9WAS0 00:17:59.640 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:59.640 [2024-11-20 18:55:21.910172] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:59.640 [2024-11-20 18:55:21.920522] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:59.640 [2024-11-20 18:55:21.920544] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:59.640 [2024-11-20 18:55:21.920566] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:17:59.640 [2024-11-20 18:55:21.921480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5d170 (107): Transport endpoint is not connected 00:17:59.640 [2024-11-20 18:55:21.922473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5d170 (9): Bad file descriptor 00:17:59.640 [2024-11-20 18:55:21.923475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:17:59.640 [2024-11-20 18:55:21.923486] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:59.640 [2024-11-20 18:55:21.923493] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:17:59.640 [2024-11-20 18:55:21.923505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:17:59.640 request: 00:17:59.640 { 00:17:59.640 "name": "TLSTEST", 00:17:59.640 "trtype": "tcp", 00:17:59.640 "traddr": "10.0.0.2", 00:17:59.640 "adrfam": "ipv4", 00:17:59.640 "trsvcid": "4420", 00:17:59.640 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:59.640 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:59.640 "prchk_reftag": false, 00:17:59.640 "prchk_guard": false, 00:17:59.640 "hdgst": false, 00:17:59.640 "ddgst": false, 00:17:59.640 "psk": "key0", 00:17:59.640 "allow_unrecognized_csi": false, 00:17:59.640 "method": "bdev_nvme_attach_controller", 00:17:59.640 "req_id": 1 00:17:59.640 } 00:17:59.640 Got JSON-RPC error response 00:17:59.640 response: 00:17:59.640 { 00:17:59.640 "code": -5, 00:17:59.640 "message": "Input/output error" 00:17:59.640 } 00:17:59.640 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3660341 00:17:59.640 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3660341 ']' 00:17:59.640 18:55:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3660341 00:17:59.640 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:59.640 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.640 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3660341 00:17:59.899 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:59.899 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:59.899 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3660341' 00:17:59.899 killing process with pid 3660341 00:17:59.899 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3660341 00:17:59.899 Received shutdown signal, test time was about 10.000000 seconds 00:17:59.899 00:17:59.899 Latency(us) 00:17:59.899 [2024-11-20T17:55:22.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.899 [2024-11-20T17:55:22.224Z] =================================================================================================================== 00:17:59.899 [2024-11-20T17:55:22.224Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:59.899 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3660341 00:17:59.899 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:59.899 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:59.899 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:59.899 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:59.899 18:55:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:59.899 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:59.899 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:59.899 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:59.899 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:59.899 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:59.899 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:59.899 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:59.899 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:59.899 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:59.899 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:59.899 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:59.899 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:59.899 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:59.899 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3660569 00:17:59.899 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:59.899 18:55:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:59.899 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3660569 /var/tmp/bdevperf.sock 00:17:59.899 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3660569 ']' 00:17:59.899 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:59.899 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:59.899 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:59.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:59.899 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:59.899 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:59.899 [2024-11-20 18:55:22.204174] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:17:59.899 [2024-11-20 18:55:22.204229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3660569 ] 00:18:00.158 [2024-11-20 18:55:22.268855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.158 [2024-11-20 18:55:22.305373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:00.158 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:00.158 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:00.158 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:00.416 [2024-11-20 18:55:22.567733] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:00.416 [2024-11-20 18:55:22.567765] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:00.416 request: 00:18:00.416 { 00:18:00.416 "name": "key0", 00:18:00.416 "path": "", 00:18:00.416 "method": "keyring_file_add_key", 00:18:00.416 "req_id": 1 00:18:00.416 } 00:18:00.416 Got JSON-RPC error response 00:18:00.416 response: 00:18:00.416 { 00:18:00.416 "code": -1, 00:18:00.416 "message": "Operation not permitted" 00:18:00.416 } 00:18:00.416 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:00.675 [2024-11-20 18:55:22.752301] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:18:00.675 [2024-11-20 18:55:22.752333] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:00.675 request: 00:18:00.675 { 00:18:00.675 "name": "TLSTEST", 00:18:00.675 "trtype": "tcp", 00:18:00.675 "traddr": "10.0.0.2", 00:18:00.675 "adrfam": "ipv4", 00:18:00.675 "trsvcid": "4420", 00:18:00.675 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.675 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:00.675 "prchk_reftag": false, 00:18:00.675 "prchk_guard": false, 00:18:00.675 "hdgst": false, 00:18:00.675 "ddgst": false, 00:18:00.675 "psk": "key0", 00:18:00.675 "allow_unrecognized_csi": false, 00:18:00.675 "method": "bdev_nvme_attach_controller", 00:18:00.675 "req_id": 1 00:18:00.675 } 00:18:00.675 Got JSON-RPC error response 00:18:00.675 response: 00:18:00.675 { 00:18:00.675 "code": -126, 00:18:00.675 "message": "Required key not available" 00:18:00.675 } 00:18:00.675 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3660569 00:18:00.675 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3660569 ']' 00:18:00.675 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3660569 00:18:00.675 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:00.675 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:00.675 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3660569 00:18:00.675 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:00.675 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:00.676 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3660569' 00:18:00.676 killing process with pid 3660569 
00:18:00.676 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3660569 00:18:00.676 Received shutdown signal, test time was about 10.000000 seconds 00:18:00.676 00:18:00.676 Latency(us) 00:18:00.676 [2024-11-20T17:55:23.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.676 [2024-11-20T17:55:23.001Z] =================================================================================================================== 00:18:00.676 [2024-11-20T17:55:23.001Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:00.676 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3660569 00:18:00.676 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:00.676 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:00.676 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:00.676 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:00.676 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:00.676 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3655905 00:18:00.676 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3655905 ']' 00:18:00.676 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3655905 00:18:00.676 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:00.676 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:00.676 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3655905 00:18:00.935 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:18:00.935 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:00.935 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3655905' 00:18:00.935 killing process with pid 3655905 00:18:00.935 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3655905 00:18:00.935 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3655905 00:18:00.935 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:00.935 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:00.935 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:00.935 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:00.935 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:00.935 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:00.935 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:00.935 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:00.935 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:00.935 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.bkUCAVdjY3 00:18:00.935 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:00.935 18:55:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.bkUCAVdjY3 00:18:00.935 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:00.935 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:00.935 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:00.935 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.935 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:00.935 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3660645 00:18:00.935 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3660645 00:18:00.935 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3660645 ']' 00:18:00.935 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.935 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:00.935 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.935 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:00.935 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.195 [2024-11-20 18:55:23.307765] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:18:01.195 [2024-11-20 18:55:23.307816] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.195 [2024-11-20 18:55:23.385697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.195 [2024-11-20 18:55:23.427352] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:01.195 [2024-11-20 18:55:23.427388] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:01.195 [2024-11-20 18:55:23.427395] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:01.195 [2024-11-20 18:55:23.427401] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:01.195 [2024-11-20 18:55:23.427407] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:01.195 [2024-11-20 18:55:23.427974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.454 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:01.454 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:01.454 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:01.454 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:01.454 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.454 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.454 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.bkUCAVdjY3 00:18:01.454 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.bkUCAVdjY3 00:18:01.454 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:01.454 [2024-11-20 18:55:23.736882] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:01.454 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:01.713 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:01.972 [2024-11-20 18:55:24.129891] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:01.972 [2024-11-20 18:55:24.130100] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:01.972 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:02.230 malloc0 00:18:02.230 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:02.230 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.bkUCAVdjY3 00:18:02.488 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:02.747 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bkUCAVdjY3 00:18:02.747 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:02.747 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:02.747 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:02.747 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.bkUCAVdjY3 00:18:02.747 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:02.747 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3661065 00:18:02.747 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:02.747 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:02.747 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3661065 /var/tmp/bdevperf.sock 00:18:02.747 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3661065 ']' 00:18:02.747 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:02.747 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:02.747 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:02.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:02.747 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:02.747 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.747 [2024-11-20 18:55:24.966528] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:18:02.747 [2024-11-20 18:55:24.966579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3661065 ] 00:18:02.747 [2024-11-20 18:55:25.043073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.006 [2024-11-20 18:55:25.083644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.006 18:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:03.006 18:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:03.006 18:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.bkUCAVdjY3 00:18:03.265 18:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:03.265 [2024-11-20 18:55:25.531576] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:03.523 TLSTESTn1 00:18:03.523 18:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:03.523 Running I/O for 10 seconds... 
00:18:05.835 5515.00 IOPS, 21.54 MiB/s [2024-11-20T17:55:28.728Z] 5572.50 IOPS, 21.77 MiB/s [2024-11-20T17:55:30.104Z] 5579.67 IOPS, 21.80 MiB/s [2024-11-20T17:55:31.041Z] 5602.00 IOPS, 21.88 MiB/s [2024-11-20T17:55:31.976Z] 5561.40 IOPS, 21.72 MiB/s [2024-11-20T17:55:32.910Z] 5523.33 IOPS, 21.58 MiB/s [2024-11-20T17:55:33.846Z] 5510.29 IOPS, 21.52 MiB/s [2024-11-20T17:55:34.831Z] 5512.88 IOPS, 21.53 MiB/s [2024-11-20T17:55:35.851Z] 5398.56 IOPS, 21.09 MiB/s [2024-11-20T17:55:35.851Z] 5296.00 IOPS, 20.69 MiB/s 00:18:13.526 Latency(us) 00:18:13.526 [2024-11-20T17:55:35.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.526 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:13.526 Verification LBA range: start 0x0 length 0x2000 00:18:13.526 TLSTESTn1 : 10.02 5297.32 20.69 0.00 0.00 24123.00 5710.99 27213.04 00:18:13.526 [2024-11-20T17:55:35.851Z] =================================================================================================================== 00:18:13.526 [2024-11-20T17:55:35.851Z] Total : 5297.32 20.69 0.00 0.00 24123.00 5710.99 27213.04 00:18:13.526 { 00:18:13.526 "results": [ 00:18:13.526 { 00:18:13.526 "job": "TLSTESTn1", 00:18:13.526 "core_mask": "0x4", 00:18:13.526 "workload": "verify", 00:18:13.526 "status": "finished", 00:18:13.526 "verify_range": { 00:18:13.526 "start": 0, 00:18:13.526 "length": 8192 00:18:13.526 }, 00:18:13.526 "queue_depth": 128, 00:18:13.526 "io_size": 4096, 00:18:13.526 "runtime": 10.021482, 00:18:13.526 "iops": 5297.320296538975, 00:18:13.526 "mibps": 20.69265740835537, 00:18:13.526 "io_failed": 0, 00:18:13.526 "io_timeout": 0, 00:18:13.526 "avg_latency_us": 24123.002189505638, 00:18:13.526 "min_latency_us": 5710.994285714286, 00:18:13.526 "max_latency_us": 27213.04380952381 00:18:13.526 } 00:18:13.526 ], 00:18:13.526 "core_count": 1 00:18:13.526 } 00:18:13.526 18:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:18:13.526 18:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3661065 00:18:13.526 18:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3661065 ']' 00:18:13.526 18:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3661065 00:18:13.526 18:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:13.526 18:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.526 18:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3661065 00:18:13.526 18:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:13.526 18:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:13.526 18:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3661065' 00:18:13.526 killing process with pid 3661065 00:18:13.526 18:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3661065 00:18:13.526 Received shutdown signal, test time was about 10.000000 seconds 00:18:13.526 00:18:13.526 Latency(us) 00:18:13.526 [2024-11-20T17:55:35.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.526 [2024-11-20T17:55:35.851Z] =================================================================================================================== 00:18:13.526 [2024-11-20T17:55:35.851Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:13.526 18:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3661065 00:18:13.785 18:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.bkUCAVdjY3 00:18:13.785 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bkUCAVdjY3 00:18:13.785 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:13.785 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bkUCAVdjY3 00:18:13.785 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:13.785 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.785 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:13.785 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.785 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bkUCAVdjY3 00:18:13.785 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:13.785 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:13.785 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:13.785 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.bkUCAVdjY3 00:18:13.785 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:13.785 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3662777 00:18:13.785 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:13.785 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:13.785 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3662777 /var/tmp/bdevperf.sock 00:18:13.785 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3662777 ']' 00:18:13.785 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:13.785 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.785 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:13.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:13.785 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.785 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.785 [2024-11-20 18:55:36.054535] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:18:13.785 [2024-11-20 18:55:36.054592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3662777 ] 00:18:14.044 [2024-11-20 18:55:36.126569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.044 [2024-11-20 18:55:36.163911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:14.044 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.044 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:14.044 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.bkUCAVdjY3 00:18:14.303 [2024-11-20 18:55:36.423011] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.bkUCAVdjY3': 0100666 00:18:14.303 [2024-11-20 18:55:36.423043] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:14.303 request: 00:18:14.303 { 00:18:14.303 "name": "key0", 00:18:14.303 "path": "/tmp/tmp.bkUCAVdjY3", 00:18:14.303 "method": "keyring_file_add_key", 00:18:14.303 "req_id": 1 00:18:14.303 } 00:18:14.303 Got JSON-RPC error response 00:18:14.303 response: 00:18:14.303 { 00:18:14.303 "code": -1, 00:18:14.303 "message": "Operation not permitted" 00:18:14.303 } 00:18:14.303 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:14.303 [2024-11-20 18:55:36.615582] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:14.303 [2024-11-20 18:55:36.615610] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:14.303 request: 00:18:14.303 { 00:18:14.303 "name": "TLSTEST", 00:18:14.303 "trtype": "tcp", 00:18:14.303 "traddr": "10.0.0.2", 00:18:14.303 "adrfam": "ipv4", 00:18:14.303 "trsvcid": "4420", 00:18:14.303 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:14.303 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:14.303 "prchk_reftag": false, 00:18:14.303 "prchk_guard": false, 00:18:14.303 "hdgst": false, 00:18:14.303 "ddgst": false, 00:18:14.303 "psk": "key0", 00:18:14.303 "allow_unrecognized_csi": false, 00:18:14.303 "method": "bdev_nvme_attach_controller", 00:18:14.303 "req_id": 1 00:18:14.303 } 00:18:14.303 Got JSON-RPC error response 00:18:14.303 response: 00:18:14.303 { 00:18:14.303 "code": -126, 00:18:14.303 "message": "Required key not available" 00:18:14.303 } 00:18:14.562 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3662777 00:18:14.562 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3662777 ']' 00:18:14.562 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3662777 00:18:14.562 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:14.562 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:14.562 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3662777 00:18:14.562 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:14.562 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:14.562 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 3662777' 00:18:14.562 killing process with pid 3662777 00:18:14.562 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3662777 00:18:14.562 Received shutdown signal, test time was about 10.000000 seconds 00:18:14.562 00:18:14.562 Latency(us) 00:18:14.562 [2024-11-20T17:55:36.887Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.562 [2024-11-20T17:55:36.887Z] =================================================================================================================== 00:18:14.562 [2024-11-20T17:55:36.887Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:14.562 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3662777 00:18:14.562 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:14.562 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:14.562 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:14.562 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:14.562 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:14.562 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3660645 00:18:14.562 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3660645 ']' 00:18:14.562 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3660645 00:18:14.562 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:14.562 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:14.562 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3660645 00:18:14.821 
18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:14.821 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:14.821 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3660645' 00:18:14.821 killing process with pid 3660645 00:18:14.821 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3660645 00:18:14.821 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3660645 00:18:14.821 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:14.821 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:14.821 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:14.821 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.821 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3662941 00:18:14.822 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:14.822 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3662941 00:18:14.822 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3662941 ']' 00:18:14.822 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.822 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.822 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:18:14.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.822 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.822 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.822 [2024-11-20 18:55:37.127702] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:18:14.822 [2024-11-20 18:55:37.127750] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.080 [2024-11-20 18:55:37.204815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.080 [2024-11-20 18:55:37.244592] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.080 [2024-11-20 18:55:37.244634] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.080 [2024-11-20 18:55:37.244642] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.080 [2024-11-20 18:55:37.244647] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.080 [2024-11-20 18:55:37.244652] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:15.080 [2024-11-20 18:55:37.245180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.080 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:15.080 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:15.080 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:15.080 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:15.080 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.080 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:15.080 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.bkUCAVdjY3 00:18:15.080 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:15.080 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.bkUCAVdjY3 00:18:15.080 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:18:15.080 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:15.080 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:18:15.080 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:15.080 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.bkUCAVdjY3 00:18:15.080 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.bkUCAVdjY3 00:18:15.080 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:15.339 [2024-11-20 18:55:37.560580] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:15.339 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:15.597 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:15.903 [2024-11-20 18:55:37.937538] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:15.903 [2024-11-20 18:55:37.937740] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:15.903 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:15.903 malloc0 00:18:15.903 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:16.161 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.bkUCAVdjY3 00:18:16.420 [2024-11-20 18:55:38.518972] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.bkUCAVdjY3': 0100666 00:18:16.420 [2024-11-20 18:55:38.518999] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:16.420 request: 00:18:16.420 { 00:18:16.420 "name": "key0", 00:18:16.420 "path": "/tmp/tmp.bkUCAVdjY3", 00:18:16.420 "method": "keyring_file_add_key", 00:18:16.420 "req_id": 1 
00:18:16.420 } 00:18:16.420 Got JSON-RPC error response 00:18:16.420 response: 00:18:16.420 { 00:18:16.420 "code": -1, 00:18:16.420 "message": "Operation not permitted" 00:18:16.420 } 00:18:16.420 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:16.420 [2024-11-20 18:55:38.715504] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:16.420 [2024-11-20 18:55:38.715539] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:16.420 request: 00:18:16.420 { 00:18:16.420 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:16.420 "host": "nqn.2016-06.io.spdk:host1", 00:18:16.420 "psk": "key0", 00:18:16.420 "method": "nvmf_subsystem_add_host", 00:18:16.420 "req_id": 1 00:18:16.420 } 00:18:16.420 Got JSON-RPC error response 00:18:16.420 response: 00:18:16.420 { 00:18:16.420 "code": -32603, 00:18:16.420 "message": "Internal error" 00:18:16.420 } 00:18:16.420 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:16.680 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:16.680 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:16.680 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:16.680 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3662941 00:18:16.680 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3662941 ']' 00:18:16.680 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3662941 00:18:16.680 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:16.680 18:55:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:16.680 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3662941 00:18:16.680 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:16.680 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:16.680 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3662941' 00:18:16.680 killing process with pid 3662941 00:18:16.680 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3662941 00:18:16.680 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3662941 00:18:16.680 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.bkUCAVdjY3 00:18:16.680 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:16.680 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:16.680 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:16.680 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.680 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3663386 00:18:16.680 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:16.680 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3663386 00:18:16.680 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3663386 ']' 00:18:16.680 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.680 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:16.680 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.680 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:16.680 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.939 [2024-11-20 18:55:39.031641] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:18:16.939 [2024-11-20 18:55:39.031697] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.939 [2024-11-20 18:55:39.110128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.939 [2024-11-20 18:55:39.147366] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.939 [2024-11-20 18:55:39.147403] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:16.939 [2024-11-20 18:55:39.147410] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:16.939 [2024-11-20 18:55:39.147416] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:16.939 [2024-11-20 18:55:39.147421] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:16.939 [2024-11-20 18:55:39.147964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.939 18:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.939 18:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:16.939 18:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:16.939 18:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:16.939 18:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.198 18:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.198 18:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.bkUCAVdjY3 00:18:17.198 18:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.bkUCAVdjY3 00:18:17.198 18:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:17.198 [2024-11-20 18:55:39.450950] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:17.198 18:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:17.457 18:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:17.715 [2024-11-20 18:55:39.839948] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:17.716 [2024-11-20 18:55:39.840171] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:17.716 18:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:17.974 malloc0 00:18:17.974 18:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:17.974 18:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.bkUCAVdjY3 00:18:18.233 18:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:18.492 18:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3663684 00:18:18.492 18:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:18.492 18:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:18.492 18:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3663684 /var/tmp/bdevperf.sock 00:18:18.492 18:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3663684 ']' 00:18:18.492 18:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:18.492 18:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:18.492 18:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:18:18.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:18.492 18:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:18.492 18:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.492 [2024-11-20 18:55:40.708092] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:18:18.492 [2024-11-20 18:55:40.708144] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3663684 ] 00:18:18.492 [2024-11-20 18:55:40.780560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.751 [2024-11-20 18:55:40.823242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.319 18:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.319 18:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:19.319 18:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.bkUCAVdjY3 00:18:19.577 18:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:19.836 [2024-11-20 18:55:41.913677] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:19.836 TLSTESTn1 00:18:19.836 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:20.094 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:20.094 "subsystems": [ 00:18:20.094 { 00:18:20.094 "subsystem": "keyring", 00:18:20.094 "config": [ 00:18:20.094 { 00:18:20.095 "method": "keyring_file_add_key", 00:18:20.095 "params": { 00:18:20.095 "name": "key0", 00:18:20.095 "path": "/tmp/tmp.bkUCAVdjY3" 00:18:20.095 } 00:18:20.095 } 00:18:20.095 ] 00:18:20.095 }, 00:18:20.095 { 00:18:20.095 "subsystem": "iobuf", 00:18:20.095 "config": [ 00:18:20.095 { 00:18:20.095 "method": "iobuf_set_options", 00:18:20.095 "params": { 00:18:20.095 "small_pool_count": 8192, 00:18:20.095 "large_pool_count": 1024, 00:18:20.095 "small_bufsize": 8192, 00:18:20.095 "large_bufsize": 135168, 00:18:20.095 "enable_numa": false 00:18:20.095 } 00:18:20.095 } 00:18:20.095 ] 00:18:20.095 }, 00:18:20.095 { 00:18:20.095 "subsystem": "sock", 00:18:20.095 "config": [ 00:18:20.095 { 00:18:20.095 "method": "sock_set_default_impl", 00:18:20.095 "params": { 00:18:20.095 "impl_name": "posix" 00:18:20.095 } 00:18:20.095 }, 00:18:20.095 { 00:18:20.095 "method": "sock_impl_set_options", 00:18:20.095 "params": { 00:18:20.095 "impl_name": "ssl", 00:18:20.095 "recv_buf_size": 4096, 00:18:20.095 "send_buf_size": 4096, 00:18:20.095 "enable_recv_pipe": true, 00:18:20.095 "enable_quickack": false, 00:18:20.095 "enable_placement_id": 0, 00:18:20.095 "enable_zerocopy_send_server": true, 00:18:20.095 "enable_zerocopy_send_client": false, 00:18:20.095 "zerocopy_threshold": 0, 00:18:20.095 "tls_version": 0, 00:18:20.095 "enable_ktls": false 00:18:20.095 } 00:18:20.095 }, 00:18:20.095 { 00:18:20.095 "method": "sock_impl_set_options", 00:18:20.095 "params": { 00:18:20.095 "impl_name": "posix", 00:18:20.095 "recv_buf_size": 2097152, 00:18:20.095 "send_buf_size": 2097152, 00:18:20.095 "enable_recv_pipe": true, 00:18:20.095 "enable_quickack": false, 00:18:20.095 "enable_placement_id": 0, 
00:18:20.095 "enable_zerocopy_send_server": true, 00:18:20.095 "enable_zerocopy_send_client": false, 00:18:20.095 "zerocopy_threshold": 0, 00:18:20.095 "tls_version": 0, 00:18:20.095 "enable_ktls": false 00:18:20.095 } 00:18:20.095 } 00:18:20.095 ] 00:18:20.095 }, 00:18:20.095 { 00:18:20.095 "subsystem": "vmd", 00:18:20.095 "config": [] 00:18:20.095 }, 00:18:20.095 { 00:18:20.095 "subsystem": "accel", 00:18:20.095 "config": [ 00:18:20.095 { 00:18:20.095 "method": "accel_set_options", 00:18:20.095 "params": { 00:18:20.095 "small_cache_size": 128, 00:18:20.095 "large_cache_size": 16, 00:18:20.095 "task_count": 2048, 00:18:20.095 "sequence_count": 2048, 00:18:20.095 "buf_count": 2048 00:18:20.095 } 00:18:20.095 } 00:18:20.095 ] 00:18:20.095 }, 00:18:20.095 { 00:18:20.095 "subsystem": "bdev", 00:18:20.095 "config": [ 00:18:20.095 { 00:18:20.095 "method": "bdev_set_options", 00:18:20.095 "params": { 00:18:20.095 "bdev_io_pool_size": 65535, 00:18:20.095 "bdev_io_cache_size": 256, 00:18:20.095 "bdev_auto_examine": true, 00:18:20.095 "iobuf_small_cache_size": 128, 00:18:20.095 "iobuf_large_cache_size": 16 00:18:20.095 } 00:18:20.095 }, 00:18:20.095 { 00:18:20.095 "method": "bdev_raid_set_options", 00:18:20.095 "params": { 00:18:20.095 "process_window_size_kb": 1024, 00:18:20.095 "process_max_bandwidth_mb_sec": 0 00:18:20.095 } 00:18:20.095 }, 00:18:20.095 { 00:18:20.095 "method": "bdev_iscsi_set_options", 00:18:20.095 "params": { 00:18:20.095 "timeout_sec": 30 00:18:20.095 } 00:18:20.095 }, 00:18:20.095 { 00:18:20.095 "method": "bdev_nvme_set_options", 00:18:20.095 "params": { 00:18:20.095 "action_on_timeout": "none", 00:18:20.095 "timeout_us": 0, 00:18:20.095 "timeout_admin_us": 0, 00:18:20.095 "keep_alive_timeout_ms": 10000, 00:18:20.095 "arbitration_burst": 0, 00:18:20.095 "low_priority_weight": 0, 00:18:20.095 "medium_priority_weight": 0, 00:18:20.095 "high_priority_weight": 0, 00:18:20.095 "nvme_adminq_poll_period_us": 10000, 00:18:20.095 "nvme_ioq_poll_period_us": 0, 
00:18:20.095 "io_queue_requests": 0, 00:18:20.095 "delay_cmd_submit": true, 00:18:20.095 "transport_retry_count": 4, 00:18:20.095 "bdev_retry_count": 3, 00:18:20.095 "transport_ack_timeout": 0, 00:18:20.095 "ctrlr_loss_timeout_sec": 0, 00:18:20.095 "reconnect_delay_sec": 0, 00:18:20.095 "fast_io_fail_timeout_sec": 0, 00:18:20.095 "disable_auto_failback": false, 00:18:20.095 "generate_uuids": false, 00:18:20.095 "transport_tos": 0, 00:18:20.095 "nvme_error_stat": false, 00:18:20.095 "rdma_srq_size": 0, 00:18:20.095 "io_path_stat": false, 00:18:20.095 "allow_accel_sequence": false, 00:18:20.095 "rdma_max_cq_size": 0, 00:18:20.095 "rdma_cm_event_timeout_ms": 0, 00:18:20.095 "dhchap_digests": [ 00:18:20.095 "sha256", 00:18:20.095 "sha384", 00:18:20.095 "sha512" 00:18:20.095 ], 00:18:20.095 "dhchap_dhgroups": [ 00:18:20.095 "null", 00:18:20.095 "ffdhe2048", 00:18:20.095 "ffdhe3072", 00:18:20.095 "ffdhe4096", 00:18:20.095 "ffdhe6144", 00:18:20.095 "ffdhe8192" 00:18:20.095 ] 00:18:20.095 } 00:18:20.095 }, 00:18:20.095 { 00:18:20.095 "method": "bdev_nvme_set_hotplug", 00:18:20.095 "params": { 00:18:20.095 "period_us": 100000, 00:18:20.095 "enable": false 00:18:20.095 } 00:18:20.095 }, 00:18:20.095 { 00:18:20.095 "method": "bdev_malloc_create", 00:18:20.095 "params": { 00:18:20.095 "name": "malloc0", 00:18:20.095 "num_blocks": 8192, 00:18:20.095 "block_size": 4096, 00:18:20.095 "physical_block_size": 4096, 00:18:20.095 "uuid": "dae1a71b-6e3c-459c-9068-b7153f4d7e1c", 00:18:20.095 "optimal_io_boundary": 0, 00:18:20.095 "md_size": 0, 00:18:20.095 "dif_type": 0, 00:18:20.095 "dif_is_head_of_md": false, 00:18:20.095 "dif_pi_format": 0 00:18:20.095 } 00:18:20.095 }, 00:18:20.095 { 00:18:20.095 "method": "bdev_wait_for_examine" 00:18:20.095 } 00:18:20.095 ] 00:18:20.095 }, 00:18:20.095 { 00:18:20.095 "subsystem": "nbd", 00:18:20.095 "config": [] 00:18:20.095 }, 00:18:20.095 { 00:18:20.095 "subsystem": "scheduler", 00:18:20.095 "config": [ 00:18:20.095 { 00:18:20.095 "method": 
"framework_set_scheduler", 00:18:20.095 "params": { 00:18:20.095 "name": "static" 00:18:20.095 } 00:18:20.095 } 00:18:20.095 ] 00:18:20.095 }, 00:18:20.095 { 00:18:20.095 "subsystem": "nvmf", 00:18:20.095 "config": [ 00:18:20.095 { 00:18:20.095 "method": "nvmf_set_config", 00:18:20.095 "params": { 00:18:20.095 "discovery_filter": "match_any", 00:18:20.095 "admin_cmd_passthru": { 00:18:20.095 "identify_ctrlr": false 00:18:20.095 }, 00:18:20.095 "dhchap_digests": [ 00:18:20.095 "sha256", 00:18:20.095 "sha384", 00:18:20.095 "sha512" 00:18:20.095 ], 00:18:20.095 "dhchap_dhgroups": [ 00:18:20.095 "null", 00:18:20.095 "ffdhe2048", 00:18:20.095 "ffdhe3072", 00:18:20.095 "ffdhe4096", 00:18:20.095 "ffdhe6144", 00:18:20.095 "ffdhe8192" 00:18:20.095 ] 00:18:20.095 } 00:18:20.095 }, 00:18:20.095 { 00:18:20.095 "method": "nvmf_set_max_subsystems", 00:18:20.095 "params": { 00:18:20.095 "max_subsystems": 1024 00:18:20.095 } 00:18:20.095 }, 00:18:20.095 { 00:18:20.095 "method": "nvmf_set_crdt", 00:18:20.095 "params": { 00:18:20.095 "crdt1": 0, 00:18:20.095 "crdt2": 0, 00:18:20.095 "crdt3": 0 00:18:20.095 } 00:18:20.095 }, 00:18:20.095 { 00:18:20.095 "method": "nvmf_create_transport", 00:18:20.095 "params": { 00:18:20.095 "trtype": "TCP", 00:18:20.095 "max_queue_depth": 128, 00:18:20.095 "max_io_qpairs_per_ctrlr": 127, 00:18:20.095 "in_capsule_data_size": 4096, 00:18:20.095 "max_io_size": 131072, 00:18:20.096 "io_unit_size": 131072, 00:18:20.096 "max_aq_depth": 128, 00:18:20.096 "num_shared_buffers": 511, 00:18:20.096 "buf_cache_size": 4294967295, 00:18:20.096 "dif_insert_or_strip": false, 00:18:20.096 "zcopy": false, 00:18:20.096 "c2h_success": false, 00:18:20.096 "sock_priority": 0, 00:18:20.096 "abort_timeout_sec": 1, 00:18:20.096 "ack_timeout": 0, 00:18:20.096 "data_wr_pool_size": 0 00:18:20.096 } 00:18:20.096 }, 00:18:20.096 { 00:18:20.096 "method": "nvmf_create_subsystem", 00:18:20.096 "params": { 00:18:20.096 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.096 
"allow_any_host": false, 00:18:20.096 "serial_number": "SPDK00000000000001", 00:18:20.096 "model_number": "SPDK bdev Controller", 00:18:20.096 "max_namespaces": 10, 00:18:20.096 "min_cntlid": 1, 00:18:20.096 "max_cntlid": 65519, 00:18:20.096 "ana_reporting": false 00:18:20.096 } 00:18:20.096 }, 00:18:20.096 { 00:18:20.096 "method": "nvmf_subsystem_add_host", 00:18:20.096 "params": { 00:18:20.096 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.096 "host": "nqn.2016-06.io.spdk:host1", 00:18:20.096 "psk": "key0" 00:18:20.096 } 00:18:20.096 }, 00:18:20.096 { 00:18:20.096 "method": "nvmf_subsystem_add_ns", 00:18:20.096 "params": { 00:18:20.096 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.096 "namespace": { 00:18:20.096 "nsid": 1, 00:18:20.096 "bdev_name": "malloc0", 00:18:20.096 "nguid": "DAE1A71B6E3C459C9068B7153F4D7E1C", 00:18:20.096 "uuid": "dae1a71b-6e3c-459c-9068-b7153f4d7e1c", 00:18:20.096 "no_auto_visible": false 00:18:20.096 } 00:18:20.096 } 00:18:20.096 }, 00:18:20.096 { 00:18:20.096 "method": "nvmf_subsystem_add_listener", 00:18:20.096 "params": { 00:18:20.096 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.096 "listen_address": { 00:18:20.096 "trtype": "TCP", 00:18:20.096 "adrfam": "IPv4", 00:18:20.096 "traddr": "10.0.0.2", 00:18:20.096 "trsvcid": "4420" 00:18:20.096 }, 00:18:20.096 "secure_channel": true 00:18:20.096 } 00:18:20.096 } 00:18:20.096 ] 00:18:20.096 } 00:18:20.096 ] 00:18:20.096 }' 00:18:20.096 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:20.355 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:20.355 "subsystems": [ 00:18:20.355 { 00:18:20.355 "subsystem": "keyring", 00:18:20.355 "config": [ 00:18:20.355 { 00:18:20.355 "method": "keyring_file_add_key", 00:18:20.355 "params": { 00:18:20.355 "name": "key0", 00:18:20.355 "path": "/tmp/tmp.bkUCAVdjY3" 00:18:20.355 } 
00:18:20.355 } 00:18:20.355 ] 00:18:20.355 }, 00:18:20.355 { 00:18:20.355 "subsystem": "iobuf", 00:18:20.355 "config": [ 00:18:20.355 { 00:18:20.355 "method": "iobuf_set_options", 00:18:20.355 "params": { 00:18:20.355 "small_pool_count": 8192, 00:18:20.355 "large_pool_count": 1024, 00:18:20.355 "small_bufsize": 8192, 00:18:20.355 "large_bufsize": 135168, 00:18:20.355 "enable_numa": false 00:18:20.355 } 00:18:20.355 } 00:18:20.355 ] 00:18:20.355 }, 00:18:20.355 { 00:18:20.355 "subsystem": "sock", 00:18:20.355 "config": [ 00:18:20.355 { 00:18:20.355 "method": "sock_set_default_impl", 00:18:20.355 "params": { 00:18:20.355 "impl_name": "posix" 00:18:20.355 } 00:18:20.355 }, 00:18:20.355 { 00:18:20.355 "method": "sock_impl_set_options", 00:18:20.355 "params": { 00:18:20.355 "impl_name": "ssl", 00:18:20.355 "recv_buf_size": 4096, 00:18:20.355 "send_buf_size": 4096, 00:18:20.355 "enable_recv_pipe": true, 00:18:20.355 "enable_quickack": false, 00:18:20.355 "enable_placement_id": 0, 00:18:20.355 "enable_zerocopy_send_server": true, 00:18:20.355 "enable_zerocopy_send_client": false, 00:18:20.355 "zerocopy_threshold": 0, 00:18:20.355 "tls_version": 0, 00:18:20.355 "enable_ktls": false 00:18:20.355 } 00:18:20.355 }, 00:18:20.355 { 00:18:20.355 "method": "sock_impl_set_options", 00:18:20.355 "params": { 00:18:20.355 "impl_name": "posix", 00:18:20.355 "recv_buf_size": 2097152, 00:18:20.355 "send_buf_size": 2097152, 00:18:20.355 "enable_recv_pipe": true, 00:18:20.355 "enable_quickack": false, 00:18:20.355 "enable_placement_id": 0, 00:18:20.355 "enable_zerocopy_send_server": true, 00:18:20.355 "enable_zerocopy_send_client": false, 00:18:20.355 "zerocopy_threshold": 0, 00:18:20.355 "tls_version": 0, 00:18:20.355 "enable_ktls": false 00:18:20.355 } 00:18:20.355 } 00:18:20.355 ] 00:18:20.355 }, 00:18:20.355 { 00:18:20.355 "subsystem": "vmd", 00:18:20.355 "config": [] 00:18:20.355 }, 00:18:20.355 { 00:18:20.355 "subsystem": "accel", 00:18:20.355 "config": [ 00:18:20.355 { 00:18:20.355 
"method": "accel_set_options", 00:18:20.355 "params": { 00:18:20.355 "small_cache_size": 128, 00:18:20.355 "large_cache_size": 16, 00:18:20.355 "task_count": 2048, 00:18:20.355 "sequence_count": 2048, 00:18:20.355 "buf_count": 2048 00:18:20.355 } 00:18:20.355 } 00:18:20.355 ] 00:18:20.355 }, 00:18:20.355 { 00:18:20.355 "subsystem": "bdev", 00:18:20.355 "config": [ 00:18:20.355 { 00:18:20.355 "method": "bdev_set_options", 00:18:20.355 "params": { 00:18:20.355 "bdev_io_pool_size": 65535, 00:18:20.355 "bdev_io_cache_size": 256, 00:18:20.355 "bdev_auto_examine": true, 00:18:20.355 "iobuf_small_cache_size": 128, 00:18:20.356 "iobuf_large_cache_size": 16 00:18:20.356 } 00:18:20.356 }, 00:18:20.356 { 00:18:20.356 "method": "bdev_raid_set_options", 00:18:20.356 "params": { 00:18:20.356 "process_window_size_kb": 1024, 00:18:20.356 "process_max_bandwidth_mb_sec": 0 00:18:20.356 } 00:18:20.356 }, 00:18:20.356 { 00:18:20.356 "method": "bdev_iscsi_set_options", 00:18:20.356 "params": { 00:18:20.356 "timeout_sec": 30 00:18:20.356 } 00:18:20.356 }, 00:18:20.356 { 00:18:20.356 "method": "bdev_nvme_set_options", 00:18:20.356 "params": { 00:18:20.356 "action_on_timeout": "none", 00:18:20.356 "timeout_us": 0, 00:18:20.356 "timeout_admin_us": 0, 00:18:20.356 "keep_alive_timeout_ms": 10000, 00:18:20.356 "arbitration_burst": 0, 00:18:20.356 "low_priority_weight": 0, 00:18:20.356 "medium_priority_weight": 0, 00:18:20.356 "high_priority_weight": 0, 00:18:20.356 "nvme_adminq_poll_period_us": 10000, 00:18:20.356 "nvme_ioq_poll_period_us": 0, 00:18:20.356 "io_queue_requests": 512, 00:18:20.356 "delay_cmd_submit": true, 00:18:20.356 "transport_retry_count": 4, 00:18:20.356 "bdev_retry_count": 3, 00:18:20.356 "transport_ack_timeout": 0, 00:18:20.356 "ctrlr_loss_timeout_sec": 0, 00:18:20.356 "reconnect_delay_sec": 0, 00:18:20.356 "fast_io_fail_timeout_sec": 0, 00:18:20.356 "disable_auto_failback": false, 00:18:20.356 "generate_uuids": false, 00:18:20.356 "transport_tos": 0, 00:18:20.356 
"nvme_error_stat": false, 00:18:20.356 "rdma_srq_size": 0, 00:18:20.356 "io_path_stat": false, 00:18:20.356 "allow_accel_sequence": false, 00:18:20.356 "rdma_max_cq_size": 0, 00:18:20.356 "rdma_cm_event_timeout_ms": 0, 00:18:20.356 "dhchap_digests": [ 00:18:20.356 "sha256", 00:18:20.356 "sha384", 00:18:20.356 "sha512" 00:18:20.356 ], 00:18:20.356 "dhchap_dhgroups": [ 00:18:20.356 "null", 00:18:20.356 "ffdhe2048", 00:18:20.356 "ffdhe3072", 00:18:20.356 "ffdhe4096", 00:18:20.356 "ffdhe6144", 00:18:20.356 "ffdhe8192" 00:18:20.356 ] 00:18:20.356 } 00:18:20.356 }, 00:18:20.356 { 00:18:20.356 "method": "bdev_nvme_attach_controller", 00:18:20.356 "params": { 00:18:20.356 "name": "TLSTEST", 00:18:20.356 "trtype": "TCP", 00:18:20.356 "adrfam": "IPv4", 00:18:20.356 "traddr": "10.0.0.2", 00:18:20.356 "trsvcid": "4420", 00:18:20.356 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.356 "prchk_reftag": false, 00:18:20.356 "prchk_guard": false, 00:18:20.356 "ctrlr_loss_timeout_sec": 0, 00:18:20.356 "reconnect_delay_sec": 0, 00:18:20.356 "fast_io_fail_timeout_sec": 0, 00:18:20.356 "psk": "key0", 00:18:20.356 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:20.356 "hdgst": false, 00:18:20.356 "ddgst": false, 00:18:20.356 "multipath": "multipath" 00:18:20.356 } 00:18:20.356 }, 00:18:20.356 { 00:18:20.356 "method": "bdev_nvme_set_hotplug", 00:18:20.356 "params": { 00:18:20.356 "period_us": 100000, 00:18:20.356 "enable": false 00:18:20.356 } 00:18:20.356 }, 00:18:20.356 { 00:18:20.356 "method": "bdev_wait_for_examine" 00:18:20.356 } 00:18:20.356 ] 00:18:20.356 }, 00:18:20.356 { 00:18:20.356 "subsystem": "nbd", 00:18:20.356 "config": [] 00:18:20.356 } 00:18:20.356 ] 00:18:20.356 }' 00:18:20.356 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3663684 00:18:20.356 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3663684 ']' 00:18:20.356 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 3663684 00:18:20.356 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:20.356 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:20.356 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3663684 00:18:20.356 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:20.356 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:20.356 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3663684' 00:18:20.356 killing process with pid 3663684 00:18:20.356 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3663684 00:18:20.356 Received shutdown signal, test time was about 10.000000 seconds 00:18:20.356 00:18:20.356 Latency(us) 00:18:20.356 [2024-11-20T17:55:42.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.356 [2024-11-20T17:55:42.681Z] =================================================================================================================== 00:18:20.356 [2024-11-20T17:55:42.681Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:20.356 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3663684 00:18:20.616 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3663386 00:18:20.616 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3663386 ']' 00:18:20.616 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3663386 00:18:20.616 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:20.616 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:20.616 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3663386 00:18:20.616 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:20.616 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:20.616 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3663386' 00:18:20.616 killing process with pid 3663386 00:18:20.616 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3663386 00:18:20.616 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3663386 00:18:20.875 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:20.875 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:20.875 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:20.875 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:20.875 "subsystems": [ 00:18:20.875 { 00:18:20.875 "subsystem": "keyring", 00:18:20.875 "config": [ 00:18:20.875 { 00:18:20.875 "method": "keyring_file_add_key", 00:18:20.875 "params": { 00:18:20.875 "name": "key0", 00:18:20.875 "path": "/tmp/tmp.bkUCAVdjY3" 00:18:20.875 } 00:18:20.875 } 00:18:20.875 ] 00:18:20.875 }, 00:18:20.875 { 00:18:20.875 "subsystem": "iobuf", 00:18:20.875 "config": [ 00:18:20.875 { 00:18:20.875 "method": "iobuf_set_options", 00:18:20.875 "params": { 00:18:20.875 "small_pool_count": 8192, 00:18:20.875 "large_pool_count": 1024, 00:18:20.875 "small_bufsize": 8192, 00:18:20.875 "large_bufsize": 135168, 00:18:20.875 "enable_numa": false 00:18:20.875 } 00:18:20.875 } 00:18:20.875 ] 00:18:20.875 }, 
00:18:20.875 { 00:18:20.875 "subsystem": "sock", 00:18:20.875 "config": [ 00:18:20.875 { 00:18:20.875 "method": "sock_set_default_impl", 00:18:20.875 "params": { 00:18:20.875 "impl_name": "posix" 00:18:20.875 } 00:18:20.875 }, 00:18:20.875 { 00:18:20.875 "method": "sock_impl_set_options", 00:18:20.875 "params": { 00:18:20.875 "impl_name": "ssl", 00:18:20.875 "recv_buf_size": 4096, 00:18:20.875 "send_buf_size": 4096, 00:18:20.875 "enable_recv_pipe": true, 00:18:20.875 "enable_quickack": false, 00:18:20.875 "enable_placement_id": 0, 00:18:20.875 "enable_zerocopy_send_server": true, 00:18:20.875 "enable_zerocopy_send_client": false, 00:18:20.875 "zerocopy_threshold": 0, 00:18:20.875 "tls_version": 0, 00:18:20.875 "enable_ktls": false 00:18:20.875 } 00:18:20.875 }, 00:18:20.875 { 00:18:20.875 "method": "sock_impl_set_options", 00:18:20.875 "params": { 00:18:20.875 "impl_name": "posix", 00:18:20.875 "recv_buf_size": 2097152, 00:18:20.875 "send_buf_size": 2097152, 00:18:20.875 "enable_recv_pipe": true, 00:18:20.875 "enable_quickack": false, 00:18:20.875 "enable_placement_id": 0, 00:18:20.875 "enable_zerocopy_send_server": true, 00:18:20.875 "enable_zerocopy_send_client": false, 00:18:20.875 "zerocopy_threshold": 0, 00:18:20.875 "tls_version": 0, 00:18:20.875 "enable_ktls": false 00:18:20.875 } 00:18:20.875 } 00:18:20.875 ] 00:18:20.875 }, 00:18:20.875 { 00:18:20.875 "subsystem": "vmd", 00:18:20.875 "config": [] 00:18:20.875 }, 00:18:20.875 { 00:18:20.875 "subsystem": "accel", 00:18:20.875 "config": [ 00:18:20.875 { 00:18:20.875 "method": "accel_set_options", 00:18:20.875 "params": { 00:18:20.875 "small_cache_size": 128, 00:18:20.875 "large_cache_size": 16, 00:18:20.875 "task_count": 2048, 00:18:20.875 "sequence_count": 2048, 00:18:20.875 "buf_count": 2048 00:18:20.875 } 00:18:20.875 } 00:18:20.875 ] 00:18:20.875 }, 00:18:20.875 { 00:18:20.875 "subsystem": "bdev", 00:18:20.875 "config": [ 00:18:20.875 { 00:18:20.875 "method": "bdev_set_options", 00:18:20.875 "params": { 
00:18:20.875 "bdev_io_pool_size": 65535, 00:18:20.875 "bdev_io_cache_size": 256, 00:18:20.875 "bdev_auto_examine": true, 00:18:20.875 "iobuf_small_cache_size": 128, 00:18:20.875 "iobuf_large_cache_size": 16 00:18:20.875 } 00:18:20.875 }, 00:18:20.875 { 00:18:20.875 "method": "bdev_raid_set_options", 00:18:20.875 "params": { 00:18:20.875 "process_window_size_kb": 1024, 00:18:20.875 "process_max_bandwidth_mb_sec": 0 00:18:20.875 } 00:18:20.875 }, 00:18:20.875 { 00:18:20.875 "method": "bdev_iscsi_set_options", 00:18:20.875 "params": { 00:18:20.875 "timeout_sec": 30 00:18:20.875 } 00:18:20.875 }, 00:18:20.875 { 00:18:20.875 "method": "bdev_nvme_set_options", 00:18:20.875 "params": { 00:18:20.875 "action_on_timeout": "none", 00:18:20.875 "timeout_us": 0, 00:18:20.875 "timeout_admin_us": 0, 00:18:20.875 "keep_alive_timeout_ms": 10000, 00:18:20.875 "arbitration_burst": 0, 00:18:20.875 "low_priority_weight": 0, 00:18:20.875 "medium_priority_weight": 0, 00:18:20.875 "high_priority_weight": 0, 00:18:20.875 "nvme_adminq_poll_period_us": 10000, 00:18:20.875 "nvme_ioq_poll_period_us": 0, 00:18:20.875 "io_queue_requests": 0, 00:18:20.875 "delay_cmd_submit": true, 00:18:20.875 "transport_retry_count": 4, 00:18:20.875 "bdev_retry_count": 3, 00:18:20.875 "transport_ack_timeout": 0, 00:18:20.875 "ctrlr_loss_timeout_sec": 0, 00:18:20.875 "reconnect_delay_sec": 0, 00:18:20.875 "fast_io_fail_timeout_sec": 0, 00:18:20.875 "disable_auto_failback": false, 00:18:20.875 "generate_uuids": false, 00:18:20.875 "transport_tos": 0, 00:18:20.875 "nvme_error_stat": false, 00:18:20.875 "rdma_srq_size": 0, 00:18:20.875 "io_path_stat": false, 00:18:20.875 "allow_accel_sequence": false, 00:18:20.875 "rdma_max_cq_size": 0, 00:18:20.875 "rdma_cm_event_timeout_ms": 0, 00:18:20.875 "dhchap_digests": [ 00:18:20.875 "sha256", 00:18:20.875 "sha384", 00:18:20.875 "sha512" 00:18:20.875 ], 00:18:20.875 "dhchap_dhgroups": [ 00:18:20.875 "null", 00:18:20.875 "ffdhe2048", 00:18:20.875 "ffdhe3072", 00:18:20.875 
"ffdhe4096", 00:18:20.875 "ffdhe6144", 00:18:20.875 "ffdhe8192" 00:18:20.875 ] 00:18:20.875 } 00:18:20.875 }, 00:18:20.875 { 00:18:20.875 "method": "bdev_nvme_set_hotplug", 00:18:20.875 "params": { 00:18:20.875 "period_us": 100000, 00:18:20.875 "enable": false 00:18:20.875 } 00:18:20.875 }, 00:18:20.875 { 00:18:20.875 "method": "bdev_malloc_create", 00:18:20.875 "params": { 00:18:20.875 "name": "malloc0", 00:18:20.875 "num_blocks": 8192, 00:18:20.875 "block_size": 4096, 00:18:20.875 "physical_block_size": 4096, 00:18:20.876 "uuid": "dae1a71b-6e3c-459c-9068-b7153f4d7e1c", 00:18:20.876 "optimal_io_boundary": 0, 00:18:20.876 "md_size": 0, 00:18:20.876 "dif_type": 0, 00:18:20.876 "dif_is_head_of_md": false, 00:18:20.876 "dif_pi_format": 0 00:18:20.876 } 00:18:20.876 }, 00:18:20.876 { 00:18:20.876 "method": "bdev_wait_for_examine" 00:18:20.876 } 00:18:20.876 ] 00:18:20.876 }, 00:18:20.876 { 00:18:20.876 "subsystem": "nbd", 00:18:20.876 "config": [] 00:18:20.876 }, 00:18:20.876 { 00:18:20.876 "subsystem": "scheduler", 00:18:20.876 "config": [ 00:18:20.876 { 00:18:20.876 "method": "framework_set_scheduler", 00:18:20.876 "params": { 00:18:20.876 "name": "static" 00:18:20.876 } 00:18:20.876 } 00:18:20.876 ] 00:18:20.876 }, 00:18:20.876 { 00:18:20.876 "subsystem": "nvmf", 00:18:20.876 "config": [ 00:18:20.876 { 00:18:20.876 "method": "nvmf_set_config", 00:18:20.876 "params": { 00:18:20.876 "discovery_filter": "match_any", 00:18:20.876 "admin_cmd_passthru": { 00:18:20.876 "identify_ctrlr": false 00:18:20.876 }, 00:18:20.876 "dhchap_digests": [ 00:18:20.876 "sha256", 00:18:20.876 "sha384", 00:18:20.876 "sha512" 00:18:20.876 ], 00:18:20.876 "dhchap_dhgroups": [ 00:18:20.876 "null", 00:18:20.876 "ffdhe2048", 00:18:20.876 "ffdhe3072", 00:18:20.876 "ffdhe4096", 00:18:20.876 "ffdhe6144", 00:18:20.876 "ffdhe8192" 00:18:20.876 ] 00:18:20.876 } 00:18:20.876 }, 00:18:20.876 { 00:18:20.876 "method": "nvmf_set_max_subsystems", 00:18:20.876 "params": { 00:18:20.876 "max_subsystems": 1024 
00:18:20.876 } 00:18:20.876 }, 00:18:20.876 { 00:18:20.876 "method": "nvmf_set_crdt", 00:18:20.876 "params": { 00:18:20.876 "crdt1": 0, 00:18:20.876 "crdt2": 0, 00:18:20.876 "crdt3": 0 00:18:20.876 } 00:18:20.876 }, 00:18:20.876 { 00:18:20.876 "method": "nvmf_create_transport", 00:18:20.876 "params": { 00:18:20.876 "trtype": "TCP", 00:18:20.876 "max_queue_depth": 128, 00:18:20.876 "max_io_qpairs_per_ctrlr": 127, 00:18:20.876 "in_capsule_data_size": 4096, 00:18:20.876 "max_io_size": 131072, 00:18:20.876 "io_unit_size": 131072, 00:18:20.876 "max_aq_depth": 128, 00:18:20.876 "num_shared_buffers": 511, 00:18:20.876 "buf_cache_size": 4294967295, 00:18:20.876 "dif_insert_or_strip": false, 00:18:20.876 "zcopy": false, 00:18:20.876 "c2h_success": false, 00:18:20.876 "sock_priority": 0, 00:18:20.876 "abort_timeout_sec": 1, 00:18:20.876 "ack_timeout": 0, 00:18:20.876 "data_wr_pool_size": 0 00:18:20.876 } 00:18:20.876 }, 00:18:20.876 { 00:18:20.876 "method": "nvmf_create_subsystem", 00:18:20.876 "params": { 00:18:20.876 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.876 "allow_any_host": false, 00:18:20.876 "serial_number": "SPDK00000000000001", 00:18:20.876 "model_number": "SPDK bdev Controller", 00:18:20.876 "max_namespaces": 10, 00:18:20.876 "min_cntlid": 1, 00:18:20.876 "max_cntlid": 65519, 00:18:20.876 "ana_reporting": false 00:18:20.876 } 00:18:20.876 }, 00:18:20.876 { 00:18:20.876 "method": "nvmf_subsystem_add_host", 00:18:20.876 "params": { 00:18:20.876 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.876 "host": "nqn.2016-06.io.spdk:host1", 00:18:20.876 "psk": "key0" 00:18:20.876 } 00:18:20.876 }, 00:18:20.876 { 00:18:20.876 "method": "nvmf_subsystem_add_ns", 00:18:20.876 "params": { 00:18:20.876 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.876 "namespace": { 00:18:20.876 "nsid": 1, 00:18:20.876 "bdev_name": "malloc0", 00:18:20.876 "nguid": "DAE1A71B6E3C459C9068B7153F4D7E1C", 00:18:20.876 "uuid": "dae1a71b-6e3c-459c-9068-b7153f4d7e1c", 00:18:20.876 "no_auto_visible": 
false 00:18:20.876 } 00:18:20.876 } 00:18:20.876 }, 00:18:20.876 { 00:18:20.876 "method": "nvmf_subsystem_add_listener", 00:18:20.876 "params": { 00:18:20.876 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.876 "listen_address": { 00:18:20.876 "trtype": "TCP", 00:18:20.876 "adrfam": "IPv4", 00:18:20.876 "traddr": "10.0.0.2", 00:18:20.876 "trsvcid": "4420" 00:18:20.876 }, 00:18:20.876 "secure_channel": true 00:18:20.876 } 00:18:20.876 } 00:18:20.876 ] 00:18:20.876 } 00:18:20.876 ] 00:18:20.876 }' 00:18:20.876 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:20.876 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3663999 00:18:20.876 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3663999 00:18:20.876 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:20.876 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3663999 ']' 00:18:20.876 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.876 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:20.876 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:20.876 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:20.876 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:20.876 [2024-11-20 18:55:43.025680] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:18:20.876 [2024-11-20 18:55:43.025731] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:20.876 [2024-11-20 18:55:43.107040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.876 [2024-11-20 18:55:43.147890] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:20.876 [2024-11-20 18:55:43.147926] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:20.876 [2024-11-20 18:55:43.147935] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:20.876 [2024-11-20 18:55:43.147941] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:20.876 [2024-11-20 18:55:43.147946] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:20.876 [2024-11-20 18:55:43.148562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.135 [2024-11-20 18:55:43.362047] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:21.135 [2024-11-20 18:55:43.394074] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:21.135 [2024-11-20 18:55:43.394295] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:21.703 18:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:21.704 18:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:21.704 18:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:21.704 18:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:21.704 18:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:21.704 18:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:21.704 18:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3664184 00:18:21.704 18:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3664184 /var/tmp/bdevperf.sock 00:18:21.704 18:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3664184 ']' 00:18:21.704 18:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:21.704 18:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:21.704 18:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:18:21.704 18:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:21.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:21.704 18:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:21.704 "subsystems": [ 00:18:21.704 { 00:18:21.704 "subsystem": "keyring", 00:18:21.704 "config": [ 00:18:21.704 { 00:18:21.704 "method": "keyring_file_add_key", 00:18:21.704 "params": { 00:18:21.704 "name": "key0", 00:18:21.704 "path": "/tmp/tmp.bkUCAVdjY3" 00:18:21.704 } 00:18:21.704 } 00:18:21.704 ] 00:18:21.704 }, 00:18:21.704 { 00:18:21.704 "subsystem": "iobuf", 00:18:21.704 "config": [ 00:18:21.704 { 00:18:21.704 "method": "iobuf_set_options", 00:18:21.704 "params": { 00:18:21.704 "small_pool_count": 8192, 00:18:21.704 "large_pool_count": 1024, 00:18:21.704 "small_bufsize": 8192, 00:18:21.704 "large_bufsize": 135168, 00:18:21.704 "enable_numa": false 00:18:21.704 } 00:18:21.704 } 00:18:21.704 ] 00:18:21.704 }, 00:18:21.704 { 00:18:21.704 "subsystem": "sock", 00:18:21.704 "config": [ 00:18:21.704 { 00:18:21.704 "method": "sock_set_default_impl", 00:18:21.704 "params": { 00:18:21.704 "impl_name": "posix" 00:18:21.704 } 00:18:21.704 }, 00:18:21.704 { 00:18:21.704 "method": "sock_impl_set_options", 00:18:21.704 "params": { 00:18:21.704 "impl_name": "ssl", 00:18:21.704 "recv_buf_size": 4096, 00:18:21.704 "send_buf_size": 4096, 00:18:21.704 "enable_recv_pipe": true, 00:18:21.704 "enable_quickack": false, 00:18:21.704 "enable_placement_id": 0, 00:18:21.704 "enable_zerocopy_send_server": true, 00:18:21.704 "enable_zerocopy_send_client": false, 00:18:21.704 "zerocopy_threshold": 0, 00:18:21.704 "tls_version": 0, 00:18:21.704 "enable_ktls": false 00:18:21.704 } 00:18:21.704 }, 00:18:21.704 { 00:18:21.704 "method": "sock_impl_set_options", 00:18:21.704 "params": { 
00:18:21.704 "impl_name": "posix", 00:18:21.704 "recv_buf_size": 2097152, 00:18:21.704 "send_buf_size": 2097152, 00:18:21.704 "enable_recv_pipe": true, 00:18:21.704 "enable_quickack": false, 00:18:21.704 "enable_placement_id": 0, 00:18:21.704 "enable_zerocopy_send_server": true, 00:18:21.704 "enable_zerocopy_send_client": false, 00:18:21.704 "zerocopy_threshold": 0, 00:18:21.704 "tls_version": 0, 00:18:21.704 "enable_ktls": false 00:18:21.704 } 00:18:21.704 } 00:18:21.704 ] 00:18:21.704 }, 00:18:21.704 { 00:18:21.704 "subsystem": "vmd", 00:18:21.704 "config": [] 00:18:21.704 }, 00:18:21.704 { 00:18:21.704 "subsystem": "accel", 00:18:21.704 "config": [ 00:18:21.704 { 00:18:21.704 "method": "accel_set_options", 00:18:21.704 "params": { 00:18:21.704 "small_cache_size": 128, 00:18:21.704 "large_cache_size": 16, 00:18:21.704 "task_count": 2048, 00:18:21.704 "sequence_count": 2048, 00:18:21.704 "buf_count": 2048 00:18:21.704 } 00:18:21.704 } 00:18:21.704 ] 00:18:21.704 }, 00:18:21.704 { 00:18:21.704 "subsystem": "bdev", 00:18:21.704 "config": [ 00:18:21.704 { 00:18:21.704 "method": "bdev_set_options", 00:18:21.704 "params": { 00:18:21.704 "bdev_io_pool_size": 65535, 00:18:21.704 "bdev_io_cache_size": 256, 00:18:21.704 "bdev_auto_examine": true, 00:18:21.704 "iobuf_small_cache_size": 128, 00:18:21.704 "iobuf_large_cache_size": 16 00:18:21.704 } 00:18:21.704 }, 00:18:21.704 { 00:18:21.704 "method": "bdev_raid_set_options", 00:18:21.704 "params": { 00:18:21.704 "process_window_size_kb": 1024, 00:18:21.704 "process_max_bandwidth_mb_sec": 0 00:18:21.704 } 00:18:21.704 }, 00:18:21.704 { 00:18:21.704 "method": "bdev_iscsi_set_options", 00:18:21.704 "params": { 00:18:21.704 "timeout_sec": 30 00:18:21.704 } 00:18:21.704 }, 00:18:21.704 { 00:18:21.704 "method": "bdev_nvme_set_options", 00:18:21.704 "params": { 00:18:21.704 "action_on_timeout": "none", 00:18:21.704 "timeout_us": 0, 00:18:21.704 "timeout_admin_us": 0, 00:18:21.704 "keep_alive_timeout_ms": 10000, 00:18:21.704 
"arbitration_burst": 0, 00:18:21.704 "low_priority_weight": 0, 00:18:21.704 "medium_priority_weight": 0, 00:18:21.704 "high_priority_weight": 0, 00:18:21.704 "nvme_adminq_poll_period_us": 10000, 00:18:21.704 "nvme_ioq_poll_period_us": 0, 00:18:21.704 "io_queue_requests": 512, 00:18:21.704 "delay_cmd_submit": true, 00:18:21.704 "transport_retry_count": 4, 00:18:21.704 "bdev_retry_count": 3, 00:18:21.704 "transport_ack_timeout": 0, 00:18:21.704 "ctrlr_loss_timeout_sec": 0, 00:18:21.704 "reconnect_delay_sec": 0, 00:18:21.704 "fast_io_fail_timeout_sec": 0, 00:18:21.704 "disable_auto_failback": false, 00:18:21.704 "generate_uuids": false, 00:18:21.704 "transport_tos": 0, 00:18:21.704 "nvme_error_stat": false, 00:18:21.704 "rdma_srq_size": 0, 00:18:21.704 "io_path_stat": false, 00:18:21.704 "allow_accel_sequence": false, 00:18:21.704 "rdma_max_cq_size": 0, 00:18:21.704 "rdma_cm_event_timeout_ms": 0, 00:18:21.704 "dhchap_digests": [ 00:18:21.704 "sha256", 00:18:21.704 "sha384", 00:18:21.704 "sha512" 00:18:21.704 ], 00:18:21.704 "dhchap_dhgroups": [ 00:18:21.704 "null", 00:18:21.704 "ffdhe2048", 00:18:21.704 "ffdhe3072", 00:18:21.704 "ffdhe4096", 00:18:21.704 "ffdhe6144", 00:18:21.704 "ffdhe8192" 00:18:21.704 ] 00:18:21.704 } 00:18:21.704 }, 00:18:21.704 { 00:18:21.704 "method": "bdev_nvme_attach_controller", 00:18:21.704 "params": { 00:18:21.704 "name": "TLSTEST", 00:18:21.704 "trtype": "TCP", 00:18:21.704 "adrfam": "IPv4", 00:18:21.704 "traddr": "10.0.0.2", 00:18:21.704 "trsvcid": "4420", 00:18:21.704 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.704 "prchk_reftag": false, 00:18:21.704 "prchk_guard": false, 00:18:21.704 "ctrlr_loss_timeout_sec": 0, 00:18:21.704 "reconnect_delay_sec": 0, 00:18:21.704 "fast_io_fail_timeout_sec": 0, 00:18:21.704 "psk": "key0", 00:18:21.704 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:21.704 "hdgst": false, 00:18:21.704 "ddgst": false, 00:18:21.704 "multipath": "multipath" 00:18:21.704 } 00:18:21.704 }, 00:18:21.704 { 00:18:21.704 
"method": "bdev_nvme_set_hotplug", 00:18:21.704 "params": { 00:18:21.704 "period_us": 100000, 00:18:21.704 "enable": false 00:18:21.704 } 00:18:21.704 }, 00:18:21.704 { 00:18:21.704 "method": "bdev_wait_for_examine" 00:18:21.704 } 00:18:21.704 ] 00:18:21.704 }, 00:18:21.704 { 00:18:21.704 "subsystem": "nbd", 00:18:21.704 "config": [] 00:18:21.704 } 00:18:21.704 ] 00:18:21.704 }' 00:18:21.705 18:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:21.705 18:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:21.705 [2024-11-20 18:55:43.948237] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:18:21.705 [2024-11-20 18:55:43.948286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3664184 ] 00:18:21.705 [2024-11-20 18:55:44.023703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.964 [2024-11-20 18:55:44.064088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:21.964 [2024-11-20 18:55:44.217244] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:22.531 18:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:22.531 18:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:22.531 18:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:22.790 Running I/O for 10 seconds... 
00:18:24.662 5489.00 IOPS, 21.44 MiB/s [2024-11-20T17:55:47.922Z] 5524.00 IOPS, 21.58 MiB/s [2024-11-20T17:55:49.299Z] 5521.33 IOPS, 21.57 MiB/s [2024-11-20T17:55:50.233Z] 5547.25 IOPS, 21.67 MiB/s [2024-11-20T17:55:51.169Z] 5541.40 IOPS, 21.65 MiB/s [2024-11-20T17:55:52.105Z] 5544.17 IOPS, 21.66 MiB/s [2024-11-20T17:55:53.042Z] 5557.14 IOPS, 21.71 MiB/s [2024-11-20T17:55:53.979Z] 5568.50 IOPS, 21.75 MiB/s [2024-11-20T17:55:54.915Z] 5530.11 IOPS, 21.60 MiB/s [2024-11-20T17:55:54.915Z] 5536.60 IOPS, 21.63 MiB/s 00:18:32.590 Latency(us) 00:18:32.590 [2024-11-20T17:55:54.915Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.590 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:32.590 Verification LBA range: start 0x0 length 0x2000 00:18:32.590 TLSTESTn1 : 10.01 5541.14 21.65 0.00 0.00 23066.31 5960.66 22469.49 00:18:32.590 [2024-11-20T17:55:54.915Z] =================================================================================================================== 00:18:32.590 [2024-11-20T17:55:54.915Z] Total : 5541.14 21.65 0.00 0.00 23066.31 5960.66 22469.49 00:18:32.590 { 00:18:32.590 "results": [ 00:18:32.590 { 00:18:32.590 "job": "TLSTESTn1", 00:18:32.590 "core_mask": "0x4", 00:18:32.590 "workload": "verify", 00:18:32.590 "status": "finished", 00:18:32.590 "verify_range": { 00:18:32.590 "start": 0, 00:18:32.590 "length": 8192 00:18:32.590 }, 00:18:32.590 "queue_depth": 128, 00:18:32.590 "io_size": 4096, 00:18:32.590 "runtime": 10.014541, 00:18:32.590 "iops": 5541.1426245097, 00:18:32.590 "mibps": 21.645088376991016, 00:18:32.590 "io_failed": 0, 00:18:32.590 "io_timeout": 0, 00:18:32.590 "avg_latency_us": 23066.311182873207, 00:18:32.590 "min_latency_us": 5960.655238095238, 00:18:32.590 "max_latency_us": 22469.485714285714 00:18:32.590 } 00:18:32.590 ], 00:18:32.590 "core_count": 1 00:18:32.590 } 00:18:32.849 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:18:32.849 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3664184 00:18:32.849 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3664184 ']' 00:18:32.849 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3664184 00:18:32.849 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:32.849 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:32.849 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3664184 00:18:32.849 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:32.849 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:32.849 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3664184' 00:18:32.849 killing process with pid 3664184 00:18:32.849 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3664184 00:18:32.849 Received shutdown signal, test time was about 10.000000 seconds 00:18:32.849 00:18:32.849 Latency(us) 00:18:32.849 [2024-11-20T17:55:55.174Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.849 [2024-11-20T17:55:55.174Z] =================================================================================================================== 00:18:32.849 [2024-11-20T17:55:55.174Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:32.849 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3664184 00:18:32.849 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3663999 00:18:32.849 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 3663999 ']' 00:18:32.849 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3663999 00:18:32.849 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:32.849 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:32.849 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3663999 00:18:33.108 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:33.108 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:33.108 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3663999' 00:18:33.108 killing process with pid 3663999 00:18:33.108 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3663999 00:18:33.108 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3663999 00:18:33.108 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:18:33.108 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:33.108 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:33.108 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.108 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3666027 00:18:33.108 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3666027 00:18:33.108 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:33.108 
18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3666027 ']' 00:18:33.108 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.108 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:33.108 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.108 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:33.108 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.108 [2024-11-20 18:55:55.425758] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:18:33.108 [2024-11-20 18:55:55.425804] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:33.367 [2024-11-20 18:55:55.501226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.367 [2024-11-20 18:55:55.541363] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:33.367 [2024-11-20 18:55:55.541400] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:33.367 [2024-11-20 18:55:55.541407] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:33.367 [2024-11-20 18:55:55.541413] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:18:33.367 [2024-11-20 18:55:55.541418] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:33.367 [2024-11-20 18:55:55.541999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.367 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:33.367 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:33.367 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:33.367 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:33.367 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.367 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:33.367 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.bkUCAVdjY3 00:18:33.367 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.bkUCAVdjY3 00:18:33.367 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:33.626 [2024-11-20 18:55:55.854194] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:33.626 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:33.885 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:34.144 [2024-11-20 18:55:56.219127] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:18:34.144 [2024-11-20 18:55:56.219347] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:34.144 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:34.144 malloc0 00:18:34.144 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:34.403 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.bkUCAVdjY3 00:18:34.662 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:34.662 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3666297 00:18:34.662 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:34.662 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:34.662 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3666297 /var/tmp/bdevperf.sock 00:18:34.662 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3666297 ']' 00:18:34.662 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:34.662 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:34.662 
18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:34.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:34.662 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:34.662 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.921 [2024-11-20 18:55:57.025253] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:18:34.921 [2024-11-20 18:55:57.025301] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3666297 ] 00:18:34.921 [2024-11-20 18:55:57.096733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.921 [2024-11-20 18:55:57.139925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.921 18:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:34.921 18:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:34.921 18:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.bkUCAVdjY3 00:18:35.179 18:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:35.438 [2024-11-20 18:55:57.600696] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:18:35.438 nvme0n1 00:18:35.438 18:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:35.697 Running I/O for 1 seconds... 00:18:36.633 5286.00 IOPS, 20.65 MiB/s 00:18:36.633 Latency(us) 00:18:36.633 [2024-11-20T17:55:58.958Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.633 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:36.633 Verification LBA range: start 0x0 length 0x2000 00:18:36.633 nvme0n1 : 1.01 5339.05 20.86 0.00 0.00 23819.50 4556.31 47934.90 00:18:36.633 [2024-11-20T17:55:58.958Z] =================================================================================================================== 00:18:36.633 [2024-11-20T17:55:58.958Z] Total : 5339.05 20.86 0.00 0.00 23819.50 4556.31 47934.90 00:18:36.633 { 00:18:36.633 "results": [ 00:18:36.633 { 00:18:36.633 "job": "nvme0n1", 00:18:36.633 "core_mask": "0x2", 00:18:36.633 "workload": "verify", 00:18:36.633 "status": "finished", 00:18:36.633 "verify_range": { 00:18:36.633 "start": 0, 00:18:36.633 "length": 8192 00:18:36.633 }, 00:18:36.633 "queue_depth": 128, 00:18:36.633 "io_size": 4096, 00:18:36.633 "runtime": 1.014039, 00:18:36.633 "iops": 5339.045145206446, 00:18:36.633 "mibps": 20.85564509846268, 00:18:36.633 "io_failed": 0, 00:18:36.633 "io_timeout": 0, 00:18:36.633 "avg_latency_us": 23819.500568191812, 00:18:36.633 "min_latency_us": 4556.312380952381, 00:18:36.633 "max_latency_us": 47934.90285714286 00:18:36.633 } 00:18:36.633 ], 00:18:36.633 "core_count": 1 00:18:36.633 } 00:18:36.633 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3666297 00:18:36.633 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3666297 ']' 00:18:36.633 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 3666297 00:18:36.633 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:36.633 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:36.633 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3666297 00:18:36.633 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:36.633 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:36.633 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3666297' 00:18:36.633 killing process with pid 3666297 00:18:36.633 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3666297 00:18:36.634 Received shutdown signal, test time was about 1.000000 seconds 00:18:36.634 00:18:36.634 Latency(us) 00:18:36.634 [2024-11-20T17:55:58.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.634 [2024-11-20T17:55:58.959Z] =================================================================================================================== 00:18:36.634 [2024-11-20T17:55:58.959Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:36.634 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3666297 00:18:36.893 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3666027 00:18:36.893 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3666027 ']' 00:18:36.893 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3666027 00:18:36.893 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:36.893 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:36.893 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3666027 00:18:36.893 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:36.893 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:36.893 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3666027' 00:18:36.893 killing process with pid 3666027 00:18:36.893 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3666027 00:18:36.893 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3666027 00:18:37.151 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:18:37.151 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:37.151 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:37.151 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.151 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3666754 00:18:37.151 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3666754 00:18:37.151 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:37.151 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3666754 ']' 00:18:37.151 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.151 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:18:37.151 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.151 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:37.151 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.151 [2024-11-20 18:55:59.306214] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:18:37.151 [2024-11-20 18:55:59.306259] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:37.151 [2024-11-20 18:55:59.368482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.151 [2024-11-20 18:55:59.406841] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:37.151 [2024-11-20 18:55:59.406878] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:37.151 [2024-11-20 18:55:59.406886] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:37.151 [2024-11-20 18:55:59.406892] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:37.151 [2024-11-20 18:55:59.406897] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:37.151 [2024-11-20 18:55:59.407461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.410 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:37.410 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:37.410 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:37.410 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:37.410 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.410 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.410 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:18:37.410 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.410 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.410 [2024-11-20 18:55:59.550900] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:37.410 malloc0 00:18:37.410 [2024-11-20 18:55:59.578860] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:37.411 [2024-11-20 18:55:59.579067] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:37.411 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.411 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3666773 00:18:37.411 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3666773 /var/tmp/bdevperf.sock 00:18:37.411 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:37.411 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3666773 ']' 00:18:37.411 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:37.411 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:37.411 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:37.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:37.411 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:37.411 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.411 [2024-11-20 18:55:59.652915] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:18:37.411 [2024-11-20 18:55:59.652954] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3666773 ] 00:18:37.411 [2024-11-20 18:55:59.725158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.670 [2024-11-20 18:55:59.767775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.670 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:37.670 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:37.670 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.bkUCAVdjY3 00:18:37.943 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:37.943 [2024-11-20 18:56:00.221419] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:38.209 nvme0n1 00:18:38.209 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:38.209 Running I/O for 1 seconds... 
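The client-side setup traced above reduces to three RPCs against the bdevperf socket: register the interchange PSK file as a keyring key, attach a TCP controller that selects that key with `--psk`, then start the configured workload. The sketch below only assembles and prints that sequence; it does not invoke SPDK. The rpc.py/bdevperf.py paths are assumed, and the PSK temp file name is specific to this run.

```shell
#!/bin/sh
# Sketch of the client-side TLS sequence from the trace above. Values are
# copied from this run; the script only prints the RPC commands.
RPC="scripts/rpc.py"                 # SPDK RPC client (path assumed)
SOCK="/var/tmp/bdevperf.sock"        # bdevperf RPC socket (-r flag above)
PSK_FILE="/tmp/tmp.bkUCAVdjY3"       # interchange PSK file from this run

# 1. Register the pre-shared key under the name "key0".
ADD_KEY="$RPC -s $SOCK keyring_file_add_key key0 $PSK_FILE"

# 2. Attach a TCP controller, selecting the key with --psk.
ATTACH="$RPC -s $SOCK bdev_nvme_attach_controller -b nvme0 -t tcp \
-a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
-n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1"

# 3. Kick off the verify workload defined on the bdevperf command line.
PERF="examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests"

printf '%s\n' "$ADD_KEY" "$ATTACH" "$PERF"
```

Removing the variable indirection and running each printed line against a live bdevperf instance reproduces the steps logged above.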
00:18:39.146 5272.00 IOPS, 20.59 MiB/s
00:18:39.146 Latency(us)
00:18:39.146 [2024-11-20T17:56:01.471Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:39.146 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:39.146 Verification LBA range: start 0x0 length 0x2000
00:18:39.146 nvme0n1 : 1.01 5330.98 20.82 0.00 0.00 23852.41 5274.09 52678.46
00:18:39.146 [2024-11-20T17:56:01.471Z] ===================================================================================================================
00:18:39.146 [2024-11-20T17:56:01.471Z] Total : 5330.98 20.82 0.00 0.00 23852.41 5274.09 52678.46
00:18:39.146 {
00:18:39.146 "results": [
00:18:39.146 {
00:18:39.146 "job": "nvme0n1",
00:18:39.146 "core_mask": "0x2",
00:18:39.146 "workload": "verify",
00:18:39.146 "status": "finished",
00:18:39.146 "verify_range": {
00:18:39.146 "start": 0,
00:18:39.146 "length": 8192
00:18:39.146 },
00:18:39.146 "queue_depth": 128,
00:18:39.146 "io_size": 4096,
00:18:39.146 "runtime": 1.013134,
00:18:39.146 "iops": 5330.982870972645,
00:18:39.146 "mibps": 20.824151839736896,
00:18:39.146 "io_failed": 0,
00:18:39.146 "io_timeout": 0,
00:18:39.146 "avg_latency_us": 23852.411974502076,
00:18:39.146 "min_latency_us": 5274.087619047619,
00:18:39.146 "max_latency_us": 52678.460952380956
00:18:39.146 }
00:18:39.146 ],
00:18:39.146 "core_count": 1
00:18:39.146 }
00:18:39.146 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config
00:18:39.146 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:39.146 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:39.406 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:39.406 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{
00:18:39.406 "subsystems": [
00:18:39.406 {
00:18:39.406 "subsystem":
"keyring", 00:18:39.406 "config": [ 00:18:39.406 { 00:18:39.406 "method": "keyring_file_add_key", 00:18:39.406 "params": { 00:18:39.406 "name": "key0", 00:18:39.406 "path": "/tmp/tmp.bkUCAVdjY3" 00:18:39.406 } 00:18:39.406 } 00:18:39.406 ] 00:18:39.406 }, 00:18:39.406 { 00:18:39.406 "subsystem": "iobuf", 00:18:39.406 "config": [ 00:18:39.406 { 00:18:39.406 "method": "iobuf_set_options", 00:18:39.406 "params": { 00:18:39.406 "small_pool_count": 8192, 00:18:39.406 "large_pool_count": 1024, 00:18:39.406 "small_bufsize": 8192, 00:18:39.406 "large_bufsize": 135168, 00:18:39.406 "enable_numa": false 00:18:39.406 } 00:18:39.406 } 00:18:39.406 ] 00:18:39.406 }, 00:18:39.406 { 00:18:39.406 "subsystem": "sock", 00:18:39.406 "config": [ 00:18:39.406 { 00:18:39.406 "method": "sock_set_default_impl", 00:18:39.406 "params": { 00:18:39.406 "impl_name": "posix" 00:18:39.406 } 00:18:39.406 }, 00:18:39.406 { 00:18:39.406 "method": "sock_impl_set_options", 00:18:39.406 "params": { 00:18:39.406 "impl_name": "ssl", 00:18:39.406 "recv_buf_size": 4096, 00:18:39.406 "send_buf_size": 4096, 00:18:39.406 "enable_recv_pipe": true, 00:18:39.406 "enable_quickack": false, 00:18:39.406 "enable_placement_id": 0, 00:18:39.406 "enable_zerocopy_send_server": true, 00:18:39.406 "enable_zerocopy_send_client": false, 00:18:39.406 "zerocopy_threshold": 0, 00:18:39.406 "tls_version": 0, 00:18:39.406 "enable_ktls": false 00:18:39.406 } 00:18:39.406 }, 00:18:39.406 { 00:18:39.406 "method": "sock_impl_set_options", 00:18:39.406 "params": { 00:18:39.406 "impl_name": "posix", 00:18:39.406 "recv_buf_size": 2097152, 00:18:39.406 "send_buf_size": 2097152, 00:18:39.406 "enable_recv_pipe": true, 00:18:39.406 "enable_quickack": false, 00:18:39.406 "enable_placement_id": 0, 00:18:39.406 "enable_zerocopy_send_server": true, 00:18:39.406 "enable_zerocopy_send_client": false, 00:18:39.406 "zerocopy_threshold": 0, 00:18:39.406 "tls_version": 0, 00:18:39.406 "enable_ktls": false 00:18:39.406 } 00:18:39.406 } 00:18:39.406 
] 00:18:39.406 }, 00:18:39.406 { 00:18:39.406 "subsystem": "vmd", 00:18:39.406 "config": [] 00:18:39.406 }, 00:18:39.406 { 00:18:39.406 "subsystem": "accel", 00:18:39.406 "config": [ 00:18:39.406 { 00:18:39.406 "method": "accel_set_options", 00:18:39.406 "params": { 00:18:39.406 "small_cache_size": 128, 00:18:39.406 "large_cache_size": 16, 00:18:39.406 "task_count": 2048, 00:18:39.406 "sequence_count": 2048, 00:18:39.406 "buf_count": 2048 00:18:39.406 } 00:18:39.406 } 00:18:39.406 ] 00:18:39.406 }, 00:18:39.406 { 00:18:39.406 "subsystem": "bdev", 00:18:39.406 "config": [ 00:18:39.406 { 00:18:39.406 "method": "bdev_set_options", 00:18:39.406 "params": { 00:18:39.406 "bdev_io_pool_size": 65535, 00:18:39.406 "bdev_io_cache_size": 256, 00:18:39.406 "bdev_auto_examine": true, 00:18:39.406 "iobuf_small_cache_size": 128, 00:18:39.406 "iobuf_large_cache_size": 16 00:18:39.406 } 00:18:39.406 }, 00:18:39.406 { 00:18:39.406 "method": "bdev_raid_set_options", 00:18:39.406 "params": { 00:18:39.406 "process_window_size_kb": 1024, 00:18:39.406 "process_max_bandwidth_mb_sec": 0 00:18:39.406 } 00:18:39.406 }, 00:18:39.406 { 00:18:39.406 "method": "bdev_iscsi_set_options", 00:18:39.406 "params": { 00:18:39.406 "timeout_sec": 30 00:18:39.406 } 00:18:39.406 }, 00:18:39.406 { 00:18:39.406 "method": "bdev_nvme_set_options", 00:18:39.406 "params": { 00:18:39.406 "action_on_timeout": "none", 00:18:39.406 "timeout_us": 0, 00:18:39.406 "timeout_admin_us": 0, 00:18:39.406 "keep_alive_timeout_ms": 10000, 00:18:39.406 "arbitration_burst": 0, 00:18:39.406 "low_priority_weight": 0, 00:18:39.406 "medium_priority_weight": 0, 00:18:39.406 "high_priority_weight": 0, 00:18:39.406 "nvme_adminq_poll_period_us": 10000, 00:18:39.406 "nvme_ioq_poll_period_us": 0, 00:18:39.406 "io_queue_requests": 0, 00:18:39.406 "delay_cmd_submit": true, 00:18:39.406 "transport_retry_count": 4, 00:18:39.406 "bdev_retry_count": 3, 00:18:39.406 "transport_ack_timeout": 0, 00:18:39.406 "ctrlr_loss_timeout_sec": 0, 
00:18:39.406 "reconnect_delay_sec": 0, 00:18:39.406 "fast_io_fail_timeout_sec": 0, 00:18:39.406 "disable_auto_failback": false, 00:18:39.406 "generate_uuids": false, 00:18:39.406 "transport_tos": 0, 00:18:39.406 "nvme_error_stat": false, 00:18:39.406 "rdma_srq_size": 0, 00:18:39.406 "io_path_stat": false, 00:18:39.406 "allow_accel_sequence": false, 00:18:39.406 "rdma_max_cq_size": 0, 00:18:39.406 "rdma_cm_event_timeout_ms": 0, 00:18:39.406 "dhchap_digests": [ 00:18:39.406 "sha256", 00:18:39.406 "sha384", 00:18:39.406 "sha512" 00:18:39.406 ], 00:18:39.406 "dhchap_dhgroups": [ 00:18:39.406 "null", 00:18:39.406 "ffdhe2048", 00:18:39.406 "ffdhe3072", 00:18:39.406 "ffdhe4096", 00:18:39.406 "ffdhe6144", 00:18:39.406 "ffdhe8192" 00:18:39.406 ] 00:18:39.406 } 00:18:39.406 }, 00:18:39.407 { 00:18:39.407 "method": "bdev_nvme_set_hotplug", 00:18:39.407 "params": { 00:18:39.407 "period_us": 100000, 00:18:39.407 "enable": false 00:18:39.407 } 00:18:39.407 }, 00:18:39.407 { 00:18:39.407 "method": "bdev_malloc_create", 00:18:39.407 "params": { 00:18:39.407 "name": "malloc0", 00:18:39.407 "num_blocks": 8192, 00:18:39.407 "block_size": 4096, 00:18:39.407 "physical_block_size": 4096, 00:18:39.407 "uuid": "0af0e696-ef3a-4ca9-bf16-1835ac438ab8", 00:18:39.407 "optimal_io_boundary": 0, 00:18:39.407 "md_size": 0, 00:18:39.407 "dif_type": 0, 00:18:39.407 "dif_is_head_of_md": false, 00:18:39.407 "dif_pi_format": 0 00:18:39.407 } 00:18:39.407 }, 00:18:39.407 { 00:18:39.407 "method": "bdev_wait_for_examine" 00:18:39.407 } 00:18:39.407 ] 00:18:39.407 }, 00:18:39.407 { 00:18:39.407 "subsystem": "nbd", 00:18:39.407 "config": [] 00:18:39.407 }, 00:18:39.407 { 00:18:39.407 "subsystem": "scheduler", 00:18:39.407 "config": [ 00:18:39.407 { 00:18:39.407 "method": "framework_set_scheduler", 00:18:39.407 "params": { 00:18:39.407 "name": "static" 00:18:39.407 } 00:18:39.407 } 00:18:39.407 ] 00:18:39.407 }, 00:18:39.407 { 00:18:39.407 "subsystem": "nvmf", 00:18:39.407 "config": [ 00:18:39.407 { 
00:18:39.407 "method": "nvmf_set_config", 00:18:39.407 "params": { 00:18:39.407 "discovery_filter": "match_any", 00:18:39.407 "admin_cmd_passthru": { 00:18:39.407 "identify_ctrlr": false 00:18:39.407 }, 00:18:39.407 "dhchap_digests": [ 00:18:39.407 "sha256", 00:18:39.407 "sha384", 00:18:39.407 "sha512" 00:18:39.407 ], 00:18:39.407 "dhchap_dhgroups": [ 00:18:39.407 "null", 00:18:39.407 "ffdhe2048", 00:18:39.407 "ffdhe3072", 00:18:39.407 "ffdhe4096", 00:18:39.407 "ffdhe6144", 00:18:39.407 "ffdhe8192" 00:18:39.407 ] 00:18:39.407 } 00:18:39.407 }, 00:18:39.407 { 00:18:39.407 "method": "nvmf_set_max_subsystems", 00:18:39.407 "params": { 00:18:39.407 "max_subsystems": 1024 00:18:39.407 } 00:18:39.407 }, 00:18:39.407 { 00:18:39.407 "method": "nvmf_set_crdt", 00:18:39.407 "params": { 00:18:39.407 "crdt1": 0, 00:18:39.407 "crdt2": 0, 00:18:39.407 "crdt3": 0 00:18:39.407 } 00:18:39.407 }, 00:18:39.407 { 00:18:39.407 "method": "nvmf_create_transport", 00:18:39.407 "params": { 00:18:39.407 "trtype": "TCP", 00:18:39.407 "max_queue_depth": 128, 00:18:39.407 "max_io_qpairs_per_ctrlr": 127, 00:18:39.407 "in_capsule_data_size": 4096, 00:18:39.407 "max_io_size": 131072, 00:18:39.407 "io_unit_size": 131072, 00:18:39.407 "max_aq_depth": 128, 00:18:39.407 "num_shared_buffers": 511, 00:18:39.407 "buf_cache_size": 4294967295, 00:18:39.407 "dif_insert_or_strip": false, 00:18:39.407 "zcopy": false, 00:18:39.407 "c2h_success": false, 00:18:39.407 "sock_priority": 0, 00:18:39.407 "abort_timeout_sec": 1, 00:18:39.407 "ack_timeout": 0, 00:18:39.407 "data_wr_pool_size": 0 00:18:39.407 } 00:18:39.407 }, 00:18:39.407 { 00:18:39.407 "method": "nvmf_create_subsystem", 00:18:39.407 "params": { 00:18:39.407 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.407 "allow_any_host": false, 00:18:39.407 "serial_number": "00000000000000000000", 00:18:39.407 "model_number": "SPDK bdev Controller", 00:18:39.407 "max_namespaces": 32, 00:18:39.407 "min_cntlid": 1, 00:18:39.407 "max_cntlid": 65519, 00:18:39.407 
"ana_reporting": false 00:18:39.407 } 00:18:39.407 }, 00:18:39.407 { 00:18:39.407 "method": "nvmf_subsystem_add_host", 00:18:39.407 "params": { 00:18:39.407 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.407 "host": "nqn.2016-06.io.spdk:host1", 00:18:39.407 "psk": "key0" 00:18:39.407 } 00:18:39.407 }, 00:18:39.407 { 00:18:39.407 "method": "nvmf_subsystem_add_ns", 00:18:39.407 "params": { 00:18:39.407 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.407 "namespace": { 00:18:39.407 "nsid": 1, 00:18:39.407 "bdev_name": "malloc0", 00:18:39.407 "nguid": "0AF0E696EF3A4CA9BF161835AC438AB8", 00:18:39.407 "uuid": "0af0e696-ef3a-4ca9-bf16-1835ac438ab8", 00:18:39.407 "no_auto_visible": false 00:18:39.407 } 00:18:39.407 } 00:18:39.407 }, 00:18:39.407 { 00:18:39.407 "method": "nvmf_subsystem_add_listener", 00:18:39.407 "params": { 00:18:39.407 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.407 "listen_address": { 00:18:39.407 "trtype": "TCP", 00:18:39.407 "adrfam": "IPv4", 00:18:39.407 "traddr": "10.0.0.2", 00:18:39.407 "trsvcid": "4420" 00:18:39.407 }, 00:18:39.407 "secure_channel": false, 00:18:39.407 "sock_impl": "ssl" 00:18:39.407 } 00:18:39.407 } 00:18:39.407 ] 00:18:39.407 } 00:18:39.407 ] 00:18:39.407 }' 00:18:39.407 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:39.667 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:18:39.667 "subsystems": [ 00:18:39.667 { 00:18:39.667 "subsystem": "keyring", 00:18:39.667 "config": [ 00:18:39.667 { 00:18:39.667 "method": "keyring_file_add_key", 00:18:39.667 "params": { 00:18:39.667 "name": "key0", 00:18:39.667 "path": "/tmp/tmp.bkUCAVdjY3" 00:18:39.667 } 00:18:39.667 } 00:18:39.667 ] 00:18:39.667 }, 00:18:39.667 { 00:18:39.667 "subsystem": "iobuf", 00:18:39.667 "config": [ 00:18:39.667 { 00:18:39.667 "method": "iobuf_set_options", 00:18:39.667 "params": { 00:18:39.667 
"small_pool_count": 8192, 00:18:39.667 "large_pool_count": 1024, 00:18:39.667 "small_bufsize": 8192, 00:18:39.667 "large_bufsize": 135168, 00:18:39.667 "enable_numa": false 00:18:39.667 } 00:18:39.667 } 00:18:39.667 ] 00:18:39.667 }, 00:18:39.667 { 00:18:39.667 "subsystem": "sock", 00:18:39.667 "config": [ 00:18:39.667 { 00:18:39.667 "method": "sock_set_default_impl", 00:18:39.667 "params": { 00:18:39.667 "impl_name": "posix" 00:18:39.667 } 00:18:39.667 }, 00:18:39.667 { 00:18:39.667 "method": "sock_impl_set_options", 00:18:39.667 "params": { 00:18:39.667 "impl_name": "ssl", 00:18:39.667 "recv_buf_size": 4096, 00:18:39.667 "send_buf_size": 4096, 00:18:39.667 "enable_recv_pipe": true, 00:18:39.667 "enable_quickack": false, 00:18:39.667 "enable_placement_id": 0, 00:18:39.667 "enable_zerocopy_send_server": true, 00:18:39.667 "enable_zerocopy_send_client": false, 00:18:39.667 "zerocopy_threshold": 0, 00:18:39.667 "tls_version": 0, 00:18:39.667 "enable_ktls": false 00:18:39.667 } 00:18:39.667 }, 00:18:39.667 { 00:18:39.667 "method": "sock_impl_set_options", 00:18:39.667 "params": { 00:18:39.667 "impl_name": "posix", 00:18:39.667 "recv_buf_size": 2097152, 00:18:39.667 "send_buf_size": 2097152, 00:18:39.667 "enable_recv_pipe": true, 00:18:39.667 "enable_quickack": false, 00:18:39.667 "enable_placement_id": 0, 00:18:39.667 "enable_zerocopy_send_server": true, 00:18:39.667 "enable_zerocopy_send_client": false, 00:18:39.667 "zerocopy_threshold": 0, 00:18:39.667 "tls_version": 0, 00:18:39.667 "enable_ktls": false 00:18:39.667 } 00:18:39.667 } 00:18:39.667 ] 00:18:39.667 }, 00:18:39.667 { 00:18:39.667 "subsystem": "vmd", 00:18:39.667 "config": [] 00:18:39.667 }, 00:18:39.667 { 00:18:39.667 "subsystem": "accel", 00:18:39.667 "config": [ 00:18:39.667 { 00:18:39.667 "method": "accel_set_options", 00:18:39.667 "params": { 00:18:39.667 "small_cache_size": 128, 00:18:39.667 "large_cache_size": 16, 00:18:39.667 "task_count": 2048, 00:18:39.667 "sequence_count": 2048, 00:18:39.667 
"buf_count": 2048 00:18:39.667 } 00:18:39.667 } 00:18:39.667 ] 00:18:39.667 }, 00:18:39.667 { 00:18:39.667 "subsystem": "bdev", 00:18:39.667 "config": [ 00:18:39.667 { 00:18:39.667 "method": "bdev_set_options", 00:18:39.667 "params": { 00:18:39.667 "bdev_io_pool_size": 65535, 00:18:39.667 "bdev_io_cache_size": 256, 00:18:39.667 "bdev_auto_examine": true, 00:18:39.667 "iobuf_small_cache_size": 128, 00:18:39.667 "iobuf_large_cache_size": 16 00:18:39.667 } 00:18:39.667 }, 00:18:39.667 { 00:18:39.667 "method": "bdev_raid_set_options", 00:18:39.667 "params": { 00:18:39.667 "process_window_size_kb": 1024, 00:18:39.667 "process_max_bandwidth_mb_sec": 0 00:18:39.667 } 00:18:39.667 }, 00:18:39.667 { 00:18:39.667 "method": "bdev_iscsi_set_options", 00:18:39.667 "params": { 00:18:39.667 "timeout_sec": 30 00:18:39.667 } 00:18:39.667 }, 00:18:39.667 { 00:18:39.667 "method": "bdev_nvme_set_options", 00:18:39.667 "params": { 00:18:39.667 "action_on_timeout": "none", 00:18:39.667 "timeout_us": 0, 00:18:39.667 "timeout_admin_us": 0, 00:18:39.667 "keep_alive_timeout_ms": 10000, 00:18:39.667 "arbitration_burst": 0, 00:18:39.667 "low_priority_weight": 0, 00:18:39.667 "medium_priority_weight": 0, 00:18:39.667 "high_priority_weight": 0, 00:18:39.667 "nvme_adminq_poll_period_us": 10000, 00:18:39.667 "nvme_ioq_poll_period_us": 0, 00:18:39.667 "io_queue_requests": 512, 00:18:39.667 "delay_cmd_submit": true, 00:18:39.667 "transport_retry_count": 4, 00:18:39.667 "bdev_retry_count": 3, 00:18:39.667 "transport_ack_timeout": 0, 00:18:39.667 "ctrlr_loss_timeout_sec": 0, 00:18:39.667 "reconnect_delay_sec": 0, 00:18:39.667 "fast_io_fail_timeout_sec": 0, 00:18:39.667 "disable_auto_failback": false, 00:18:39.667 "generate_uuids": false, 00:18:39.667 "transport_tos": 0, 00:18:39.667 "nvme_error_stat": false, 00:18:39.667 "rdma_srq_size": 0, 00:18:39.667 "io_path_stat": false, 00:18:39.668 "allow_accel_sequence": false, 00:18:39.668 "rdma_max_cq_size": 0, 00:18:39.668 "rdma_cm_event_timeout_ms": 0, 
00:18:39.668 "dhchap_digests": [ 00:18:39.668 "sha256", 00:18:39.668 "sha384", 00:18:39.668 "sha512" 00:18:39.668 ], 00:18:39.668 "dhchap_dhgroups": [ 00:18:39.668 "null", 00:18:39.668 "ffdhe2048", 00:18:39.668 "ffdhe3072", 00:18:39.668 "ffdhe4096", 00:18:39.668 "ffdhe6144", 00:18:39.668 "ffdhe8192" 00:18:39.668 ] 00:18:39.668 } 00:18:39.668 }, 00:18:39.668 { 00:18:39.668 "method": "bdev_nvme_attach_controller", 00:18:39.668 "params": { 00:18:39.668 "name": "nvme0", 00:18:39.668 "trtype": "TCP", 00:18:39.668 "adrfam": "IPv4", 00:18:39.668 "traddr": "10.0.0.2", 00:18:39.668 "trsvcid": "4420", 00:18:39.668 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.668 "prchk_reftag": false, 00:18:39.668 "prchk_guard": false, 00:18:39.668 "ctrlr_loss_timeout_sec": 0, 00:18:39.668 "reconnect_delay_sec": 0, 00:18:39.668 "fast_io_fail_timeout_sec": 0, 00:18:39.668 "psk": "key0", 00:18:39.668 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:39.668 "hdgst": false, 00:18:39.668 "ddgst": false, 00:18:39.668 "multipath": "multipath" 00:18:39.668 } 00:18:39.668 }, 00:18:39.668 { 00:18:39.668 "method": "bdev_nvme_set_hotplug", 00:18:39.668 "params": { 00:18:39.668 "period_us": 100000, 00:18:39.668 "enable": false 00:18:39.668 } 00:18:39.668 }, 00:18:39.668 { 00:18:39.668 "method": "bdev_enable_histogram", 00:18:39.668 "params": { 00:18:39.668 "name": "nvme0n1", 00:18:39.668 "enable": true 00:18:39.668 } 00:18:39.668 }, 00:18:39.668 { 00:18:39.668 "method": "bdev_wait_for_examine" 00:18:39.668 } 00:18:39.668 ] 00:18:39.668 }, 00:18:39.668 { 00:18:39.668 "subsystem": "nbd", 00:18:39.668 "config": [] 00:18:39.668 } 00:18:39.668 ] 00:18:39.668 }' 00:18:39.668 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3666773 00:18:39.668 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3666773 ']' 00:18:39.668 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3666773 00:18:39.668 18:56:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:39.668 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:39.668 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3666773
00:18:39.668 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:18:39.668 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:18:39.668 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3666773'
00:18:39.668 killing process with pid 3666773
00:18:39.668 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3666773
00:18:39.668 Received shutdown signal, test time was about 1.000000 seconds
00:18:39.668
00:18:39.668 Latency(us)
00:18:39.668 [2024-11-20T17:56:01.993Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:39.668 [2024-11-20T17:56:01.993Z] ===================================================================================================================
00:18:39.668 [2024-11-20T17:56:01.993Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:39.668 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3666773
00:18:39.928 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3666754
00:18:39.928 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3666754 ']'
00:18:39.928 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3666754
00:18:39.928 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:39.928 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:39.928
18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3666754 00:18:39.928 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:39.928 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:39.928 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3666754' 00:18:39.928 killing process with pid 3666754 00:18:39.928 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3666754 00:18:39.928 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3666754 00:18:39.928 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:18:39.928 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:39.928 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:39.928 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:18:39.928 "subsystems": [ 00:18:39.928 { 00:18:39.928 "subsystem": "keyring", 00:18:39.928 "config": [ 00:18:39.928 { 00:18:39.928 "method": "keyring_file_add_key", 00:18:39.928 "params": { 00:18:39.928 "name": "key0", 00:18:39.929 "path": "/tmp/tmp.bkUCAVdjY3" 00:18:39.929 } 00:18:39.929 } 00:18:39.929 ] 00:18:39.929 }, 00:18:39.929 { 00:18:39.929 "subsystem": "iobuf", 00:18:39.929 "config": [ 00:18:39.929 { 00:18:39.929 "method": "iobuf_set_options", 00:18:39.929 "params": { 00:18:39.929 "small_pool_count": 8192, 00:18:39.929 "large_pool_count": 1024, 00:18:39.929 "small_bufsize": 8192, 00:18:39.929 "large_bufsize": 135168, 00:18:39.929 "enable_numa": false 00:18:39.929 } 00:18:39.929 } 00:18:39.929 ] 00:18:39.929 }, 00:18:39.929 { 00:18:39.929 "subsystem": "sock", 00:18:39.929 "config": [ 
00:18:39.929 { 00:18:39.929 "method": "sock_set_default_impl", 00:18:39.929 "params": { 00:18:39.929 "impl_name": "posix" 00:18:39.929 } 00:18:39.929 }, 00:18:39.929 { 00:18:39.929 "method": "sock_impl_set_options", 00:18:39.929 "params": { 00:18:39.929 "impl_name": "ssl", 00:18:39.929 "recv_buf_size": 4096, 00:18:39.929 "send_buf_size": 4096, 00:18:39.929 "enable_recv_pipe": true, 00:18:39.929 "enable_quickack": false, 00:18:39.929 "enable_placement_id": 0, 00:18:39.929 "enable_zerocopy_send_server": true, 00:18:39.929 "enable_zerocopy_send_client": false, 00:18:39.929 "zerocopy_threshold": 0, 00:18:39.929 "tls_version": 0, 00:18:39.929 "enable_ktls": false 00:18:39.929 } 00:18:39.929 }, 00:18:39.929 { 00:18:39.929 "method": "sock_impl_set_options", 00:18:39.929 "params": { 00:18:39.929 "impl_name": "posix", 00:18:39.929 "recv_buf_size": 2097152, 00:18:39.929 "send_buf_size": 2097152, 00:18:39.929 "enable_recv_pipe": true, 00:18:39.929 "enable_quickack": false, 00:18:39.929 "enable_placement_id": 0, 00:18:39.929 "enable_zerocopy_send_server": true, 00:18:39.929 "enable_zerocopy_send_client": false, 00:18:39.929 "zerocopy_threshold": 0, 00:18:39.929 "tls_version": 0, 00:18:39.929 "enable_ktls": false 00:18:39.929 } 00:18:39.929 } 00:18:39.929 ] 00:18:39.929 }, 00:18:39.929 { 00:18:39.929 "subsystem": "vmd", 00:18:39.929 "config": [] 00:18:39.929 }, 00:18:39.929 { 00:18:39.929 "subsystem": "accel", 00:18:39.929 "config": [ 00:18:39.929 { 00:18:39.929 "method": "accel_set_options", 00:18:39.929 "params": { 00:18:39.929 "small_cache_size": 128, 00:18:39.929 "large_cache_size": 16, 00:18:39.929 "task_count": 2048, 00:18:39.929 "sequence_count": 2048, 00:18:39.929 "buf_count": 2048 00:18:39.929 } 00:18:39.929 } 00:18:39.929 ] 00:18:39.929 }, 00:18:39.929 { 00:18:39.929 "subsystem": "bdev", 00:18:39.929 "config": [ 00:18:39.929 { 00:18:39.929 "method": "bdev_set_options", 00:18:39.929 "params": { 00:18:39.929 "bdev_io_pool_size": 65535, 00:18:39.929 "bdev_io_cache_size": 
256, 00:18:39.929 "bdev_auto_examine": true, 00:18:39.929 "iobuf_small_cache_size": 128, 00:18:39.929 "iobuf_large_cache_size": 16 00:18:39.929 } 00:18:39.929 }, 00:18:39.929 { 00:18:39.929 "method": "bdev_raid_set_options", 00:18:39.929 "params": { 00:18:39.929 "process_window_size_kb": 1024, 00:18:39.929 "process_max_bandwidth_mb_sec": 0 00:18:39.929 } 00:18:39.929 }, 00:18:39.929 { 00:18:39.929 "method": "bdev_iscsi_set_options", 00:18:39.929 "params": { 00:18:39.929 "timeout_sec": 30 00:18:39.929 } 00:18:39.929 }, 00:18:39.929 { 00:18:39.929 "method": "bdev_nvme_set_options", 00:18:39.929 "params": { 00:18:39.929 "action_on_timeout": "none", 00:18:39.929 "timeout_us": 0, 00:18:39.929 "timeout_admin_us": 0, 00:18:39.929 "keep_alive_timeout_ms": 10000, 00:18:39.929 "arbitration_burst": 0, 00:18:39.929 "low_priority_weight": 0, 00:18:39.929 "medium_priority_weight": 0, 00:18:39.929 "high_priority_weight": 0, 00:18:39.929 "nvme_adminq_poll_period_us": 10000, 00:18:39.929 "nvme_ioq_poll_period_us": 0, 00:18:39.929 "io_queue_requests": 0, 00:18:39.929 "delay_cmd_submit": true, 00:18:39.929 "transport_retry_count": 4, 00:18:39.929 "bdev_retry_count": 3, 00:18:39.929 "transport_ack_timeout": 0, 00:18:39.929 "ctrlr_loss_timeout_sec": 0, 00:18:39.929 "reconnect_delay_sec": 0, 00:18:39.929 "fast_io_fail_timeout_sec": 0, 00:18:39.929 "disable_auto_failback": false, 00:18:39.929 "generate_uuids": false, 00:18:39.929 "transport_tos": 0, 00:18:39.929 "nvme_error_stat": false, 00:18:39.929 "rdma_srq_size": 0, 00:18:39.929 "io_path_stat": false, 00:18:39.929 "allow_accel_sequence": false, 00:18:39.929 "rdma_max_cq_size": 0, 00:18:39.929 "rdma_cm_event_timeout_ms": 0, 00:18:39.929 "dhchap_digests": [ 00:18:39.929 "sha256", 00:18:39.929 "sha384", 00:18:39.929 "sha512" 00:18:39.929 ], 00:18:39.929 "dhchap_dhgroups": [ 00:18:39.929 "null", 00:18:39.929 "ffdhe2048", 00:18:39.929 "ffdhe3072", 00:18:39.929 "ffdhe4096", 00:18:39.929 "ffdhe6144", 00:18:39.929 "ffdhe8192" 00:18:39.929 ] 
00:18:39.929 } 00:18:39.929 }, 00:18:39.929 { 00:18:39.929 "method": "bdev_nvme_set_hotplug", 00:18:39.929 "params": { 00:18:39.929 "period_us": 100000, 00:18:39.929 "enable": false 00:18:39.929 } 00:18:39.929 }, 00:18:39.929 { 00:18:39.929 "method": "bdev_malloc_create", 00:18:39.929 "params": { 00:18:39.929 "name": "malloc0", 00:18:39.929 "num_blocks": 8192, 00:18:39.929 "block_size": 4096, 00:18:39.929 "physical_block_size": 4096, 00:18:39.929 "uuid": "0af0e696-ef3a-4ca9-bf16-1835ac438ab8", 00:18:39.929 "optimal_io_boundary": 0, 00:18:39.929 "md_size": 0, 00:18:39.929 "dif_type": 0, 00:18:39.929 "dif_is_head_of_md": false, 00:18:39.929 "dif_pi_format": 0 00:18:39.929 } 00:18:39.929 }, 00:18:39.929 { 00:18:39.929 "method": "bdev_wait_for_examine" 00:18:39.929 } 00:18:39.929 ] 00:18:39.929 }, 00:18:39.929 { 00:18:39.929 "subsystem": "nbd", 00:18:39.929 "config": [] 00:18:39.929 }, 00:18:39.929 { 00:18:39.929 "subsystem": "scheduler", 00:18:39.929 "config": [ 00:18:39.929 { 00:18:39.929 "method": "framework_set_scheduler", 00:18:39.929 "params": { 00:18:39.929 "name": "static" 00:18:39.929 } 00:18:39.929 } 00:18:39.929 ] 00:18:39.929 }, 00:18:39.929 { 00:18:39.929 "subsystem": "nvmf", 00:18:39.929 "config": [ 00:18:39.929 { 00:18:39.929 "method": "nvmf_set_config", 00:18:39.929 "params": { 00:18:39.929 "discovery_filter": "match_any", 00:18:39.929 "admin_cmd_passthru": { 00:18:39.929 "identify_ctrlr": false 00:18:39.929 }, 00:18:39.929 "dhchap_digests": [ 00:18:39.929 "sha256", 00:18:39.929 "sha384", 00:18:39.929 "sha512" 00:18:39.929 ], 00:18:39.929 "dhchap_dhgroups": [ 00:18:39.929 "null", 00:18:39.929 "ffdhe2048", 00:18:39.929 "ffdhe3072", 00:18:39.929 "ffdhe4096", 00:18:39.929 "ffdhe6144", 00:18:39.929 "ffdhe8192" 00:18:39.929 ] 00:18:39.929 } 00:18:39.929 }, 00:18:39.929 { 00:18:39.929 "method": "nvmf_set_max_subsystems", 00:18:39.929 "params": { 00:18:39.929 "max_subsystems": 1024 00:18:39.929 } 00:18:39.929 }, 00:18:39.929 { 00:18:39.929 "method": 
"nvmf_set_crdt", 00:18:39.929 "params": { 00:18:39.929 "crdt1": 0, 00:18:39.929 "crdt2": 0, 00:18:39.929 "crdt3": 0 00:18:39.929 } 00:18:39.929 }, 00:18:39.929 { 00:18:39.929 "method": "nvmf_create_transport", 00:18:39.929 "params": { 00:18:39.929 "trtype": "TCP", 00:18:39.929 "max_queue_depth": 128, 00:18:39.929 "max_io_qpairs_per_ctrlr": 127, 00:18:39.929 "in_capsule_data_size": 4096, 00:18:39.929 "max_io_size": 131072, 00:18:39.929 "io_unit_size": 131072, 00:18:39.929 "max_aq_depth": 128, 00:18:39.929 "num_shared_buffers": 511, 00:18:39.929 "buf_cache_size": 4294967295, 00:18:39.929 "dif_insert_or_strip": false, 00:18:39.929 "zcopy": false, 00:18:39.929 "c2h_success": false, 00:18:39.929 "sock_priority": 0, 00:18:39.929 "abort_timeout_sec": 1, 00:18:39.929 "ack_timeout": 0, 00:18:39.929 "data_wr_pool_size": 0 00:18:39.929 } 00:18:39.929 }, 00:18:39.929 { 00:18:39.929 "method": "nvmf_create_subsystem", 00:18:39.929 "params": { 00:18:39.929 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.929 "allow_any_host": false, 00:18:39.929 "serial_number": "00000000000000000000", 00:18:39.929 "model_number": "SPDK bdev Controller", 00:18:39.929 "max_namespaces": 32, 00:18:39.929 "min_cntlid": 1, 00:18:39.929 "max_cntlid": 65519, 00:18:39.929 "ana_reporting": false 00:18:39.929 } 00:18:39.929 }, 00:18:39.929 { 00:18:39.929 "method": "nvmf_subsystem_add_host", 00:18:39.929 "params": { 00:18:39.929 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.929 "host": "nqn.2016-06.io.spdk:host1", 00:18:39.929 "psk": "key0" 00:18:39.929 } 00:18:39.929 }, 00:18:39.929 { 00:18:39.929 "method": "nvmf_subsystem_add_ns", 00:18:39.929 "params": { 00:18:39.929 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.929 "namespace": { 00:18:39.929 "nsid": 1, 00:18:39.929 "bdev_name": "malloc0", 00:18:39.929 "nguid": "0AF0E696EF3A4CA9BF161835AC438AB8", 00:18:39.929 "uuid": "0af0e696-ef3a-4ca9-bf16-1835ac438ab8", 00:18:39.929 "no_auto_visible": false 00:18:39.929 } 00:18:39.929 } 00:18:39.929 }, 00:18:39.929 { 
00:18:39.930 "method": "nvmf_subsystem_add_listener", 00:18:39.930 "params": { 00:18:39.930 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.930 "listen_address": { 00:18:39.930 "trtype": "TCP", 00:18:39.930 "adrfam": "IPv4", 00:18:39.930 "traddr": "10.0.0.2", 00:18:39.930 "trsvcid": "4420" 00:18:39.930 }, 00:18:39.930 "secure_channel": false, 00:18:39.930 "sock_impl": "ssl" 00:18:39.930 } 00:18:39.930 } 00:18:39.930 ] 00:18:39.930 } 00:18:39.930 ] 00:18:39.930 }' 00:18:39.930 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.930 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3667253 00:18:39.930 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:39.930 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3667253 00:18:39.930 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3667253 ']' 00:18:39.930 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.930 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:39.930 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.930 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:40.189 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.189 [2024-11-20 18:56:02.300833] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:18:40.189 [2024-11-20 18:56:02.300881] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:40.189 [2024-11-20 18:56:02.379918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.189 [2024-11-20 18:56:02.415133] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:40.189 [2024-11-20 18:56:02.415182] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:40.189 [2024-11-20 18:56:02.415188] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:40.189 [2024-11-20 18:56:02.415194] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:40.189 [2024-11-20 18:56:02.415198] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:40.189 [2024-11-20 18:56:02.415812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.449 [2024-11-20 18:56:02.629032] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:40.449 [2024-11-20 18:56:02.661073] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:40.449 [2024-11-20 18:56:02.661296] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:41.018 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.018 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:41.018 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:41.018 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:41.018 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.018 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:41.018 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3667498 00:18:41.018 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3667498 /var/tmp/bdevperf.sock 00:18:41.018 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3667498 ']' 00:18:41.018 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:41.018 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:41.018 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:18:41.018 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:41.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:41.018 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:18:41.018 "subsystems": [ 00:18:41.018 { 00:18:41.018 "subsystem": "keyring", 00:18:41.018 "config": [ 00:18:41.018 { 00:18:41.018 "method": "keyring_file_add_key", 00:18:41.018 "params": { 00:18:41.018 "name": "key0", 00:18:41.018 "path": "/tmp/tmp.bkUCAVdjY3" 00:18:41.018 } 00:18:41.018 } 00:18:41.018 ] 00:18:41.018 }, 00:18:41.018 { 00:18:41.018 "subsystem": "iobuf", 00:18:41.018 "config": [ 00:18:41.018 { 00:18:41.018 "method": "iobuf_set_options", 00:18:41.018 "params": { 00:18:41.018 "small_pool_count": 8192, 00:18:41.018 "large_pool_count": 1024, 00:18:41.018 "small_bufsize": 8192, 00:18:41.018 "large_bufsize": 135168, 00:18:41.018 "enable_numa": false 00:18:41.018 } 00:18:41.018 } 00:18:41.018 ] 00:18:41.018 }, 00:18:41.018 { 00:18:41.018 "subsystem": "sock", 00:18:41.018 "config": [ 00:18:41.018 { 00:18:41.018 "method": "sock_set_default_impl", 00:18:41.018 "params": { 00:18:41.018 "impl_name": "posix" 00:18:41.018 } 00:18:41.018 }, 00:18:41.018 { 00:18:41.018 "method": "sock_impl_set_options", 00:18:41.018 "params": { 00:18:41.018 "impl_name": "ssl", 00:18:41.018 "recv_buf_size": 4096, 00:18:41.018 "send_buf_size": 4096, 00:18:41.018 "enable_recv_pipe": true, 00:18:41.018 "enable_quickack": false, 00:18:41.018 "enable_placement_id": 0, 00:18:41.018 "enable_zerocopy_send_server": true, 00:18:41.018 "enable_zerocopy_send_client": false, 00:18:41.018 "zerocopy_threshold": 0, 00:18:41.018 "tls_version": 0, 00:18:41.018 "enable_ktls": false 00:18:41.018 } 00:18:41.018 }, 00:18:41.018 { 00:18:41.018 "method": "sock_impl_set_options", 00:18:41.018 "params": { 
00:18:41.018 "impl_name": "posix", 00:18:41.018 "recv_buf_size": 2097152, 00:18:41.018 "send_buf_size": 2097152, 00:18:41.018 "enable_recv_pipe": true, 00:18:41.018 "enable_quickack": false, 00:18:41.018 "enable_placement_id": 0, 00:18:41.018 "enable_zerocopy_send_server": true, 00:18:41.018 "enable_zerocopy_send_client": false, 00:18:41.018 "zerocopy_threshold": 0, 00:18:41.018 "tls_version": 0, 00:18:41.018 "enable_ktls": false 00:18:41.018 } 00:18:41.018 } 00:18:41.018 ] 00:18:41.018 }, 00:18:41.018 { 00:18:41.018 "subsystem": "vmd", 00:18:41.018 "config": [] 00:18:41.018 }, 00:18:41.018 { 00:18:41.018 "subsystem": "accel", 00:18:41.018 "config": [ 00:18:41.018 { 00:18:41.018 "method": "accel_set_options", 00:18:41.018 "params": { 00:18:41.018 "small_cache_size": 128, 00:18:41.018 "large_cache_size": 16, 00:18:41.018 "task_count": 2048, 00:18:41.018 "sequence_count": 2048, 00:18:41.018 "buf_count": 2048 00:18:41.018 } 00:18:41.018 } 00:18:41.018 ] 00:18:41.018 }, 00:18:41.018 { 00:18:41.018 "subsystem": "bdev", 00:18:41.018 "config": [ 00:18:41.018 { 00:18:41.018 "method": "bdev_set_options", 00:18:41.018 "params": { 00:18:41.018 "bdev_io_pool_size": 65535, 00:18:41.018 "bdev_io_cache_size": 256, 00:18:41.018 "bdev_auto_examine": true, 00:18:41.018 "iobuf_small_cache_size": 128, 00:18:41.018 "iobuf_large_cache_size": 16 00:18:41.018 } 00:18:41.018 }, 00:18:41.018 { 00:18:41.018 "method": "bdev_raid_set_options", 00:18:41.018 "params": { 00:18:41.018 "process_window_size_kb": 1024, 00:18:41.018 "process_max_bandwidth_mb_sec": 0 00:18:41.018 } 00:18:41.018 }, 00:18:41.018 { 00:18:41.018 "method": "bdev_iscsi_set_options", 00:18:41.018 "params": { 00:18:41.018 "timeout_sec": 30 00:18:41.018 } 00:18:41.018 }, 00:18:41.018 { 00:18:41.018 "method": "bdev_nvme_set_options", 00:18:41.018 "params": { 00:18:41.018 "action_on_timeout": "none", 00:18:41.018 "timeout_us": 0, 00:18:41.018 "timeout_admin_us": 0, 00:18:41.018 "keep_alive_timeout_ms": 10000, 00:18:41.018 
"arbitration_burst": 0, 00:18:41.018 "low_priority_weight": 0, 00:18:41.018 "medium_priority_weight": 0, 00:18:41.018 "high_priority_weight": 0, 00:18:41.018 "nvme_adminq_poll_period_us": 10000, 00:18:41.018 "nvme_ioq_poll_period_us": 0, 00:18:41.018 "io_queue_requests": 512, 00:18:41.018 "delay_cmd_submit": true, 00:18:41.018 "transport_retry_count": 4, 00:18:41.018 "bdev_retry_count": 3, 00:18:41.018 "transport_ack_timeout": 0, 00:18:41.018 "ctrlr_loss_timeout_sec": 0, 00:18:41.018 "reconnect_delay_sec": 0, 00:18:41.018 "fast_io_fail_timeout_sec": 0, 00:18:41.018 "disable_auto_failback": false, 00:18:41.018 "generate_uuids": false, 00:18:41.019 "transport_tos": 0, 00:18:41.019 "nvme_error_stat": false, 00:18:41.019 "rdma_srq_size": 0, 00:18:41.019 "io_path_stat": false, 00:18:41.019 "allow_accel_sequence": false, 00:18:41.019 "rdma_max_cq_size": 0, 00:18:41.019 "rdma_cm_event_timeout_ms": 0, 00:18:41.019 "dhchap_digests": [ 00:18:41.019 "sha256", 00:18:41.019 "sha384", 00:18:41.019 "sha512" 00:18:41.019 ], 00:18:41.019 "dhchap_dhgroups": [ 00:18:41.019 "null", 00:18:41.019 "ffdhe2048", 00:18:41.019 "ffdhe3072", 00:18:41.019 "ffdhe4096", 00:18:41.019 "ffdhe6144", 00:18:41.019 "ffdhe8192" 00:18:41.019 ] 00:18:41.019 } 00:18:41.019 }, 00:18:41.019 { 00:18:41.019 "method": "bdev_nvme_attach_controller", 00:18:41.019 "params": { 00:18:41.019 "name": "nvme0", 00:18:41.019 "trtype": "TCP", 00:18:41.019 "adrfam": "IPv4", 00:18:41.019 "traddr": "10.0.0.2", 00:18:41.019 "trsvcid": "4420", 00:18:41.019 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.019 "prchk_reftag": false, 00:18:41.019 "prchk_guard": false, 00:18:41.019 "ctrlr_loss_timeout_sec": 0, 00:18:41.019 "reconnect_delay_sec": 0, 00:18:41.019 "fast_io_fail_timeout_sec": 0, 00:18:41.019 "psk": "key0", 00:18:41.019 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:41.019 "hdgst": false, 00:18:41.019 "ddgst": false, 00:18:41.019 "multipath": "multipath" 00:18:41.019 } 00:18:41.019 }, 00:18:41.019 { 00:18:41.019 
"method": "bdev_nvme_set_hotplug", 00:18:41.019 "params": { 00:18:41.019 "period_us": 100000, 00:18:41.019 "enable": false 00:18:41.019 } 00:18:41.019 }, 00:18:41.019 { 00:18:41.019 "method": "bdev_enable_histogram", 00:18:41.019 "params": { 00:18:41.019 "name": "nvme0n1", 00:18:41.019 "enable": true 00:18:41.019 } 00:18:41.019 }, 00:18:41.019 { 00:18:41.019 "method": "bdev_wait_for_examine" 00:18:41.019 } 00:18:41.019 ] 00:18:41.019 }, 00:18:41.019 { 00:18:41.019 "subsystem": "nbd", 00:18:41.019 "config": [] 00:18:41.019 } 00:18:41.019 ] 00:18:41.019 }' 00:18:41.019 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:41.019 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.019 [2024-11-20 18:56:03.198471] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:18:41.019 [2024-11-20 18:56:03.198515] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3667498 ] 00:18:41.019 [2024-11-20 18:56:03.271803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.019 [2024-11-20 18:56:03.313119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.278 [2024-11-20 18:56:03.467041] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:41.846 18:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.846 18:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:41.846 18:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:41.846 18:56:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:18:42.106 18:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.106 18:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:42.106 Running I/O for 1 seconds... 00:18:43.043 5347.00 IOPS, 20.89 MiB/s 00:18:43.043 Latency(us) 00:18:43.043 [2024-11-20T17:56:05.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.043 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:43.043 Verification LBA range: start 0x0 length 0x2000 00:18:43.043 nvme0n1 : 1.02 5372.02 20.98 0.00 0.00 23623.77 4962.01 35701.52 00:18:43.043 [2024-11-20T17:56:05.368Z] =================================================================================================================== 00:18:43.043 [2024-11-20T17:56:05.368Z] Total : 5372.02 20.98 0.00 0.00 23623.77 4962.01 35701.52 00:18:43.043 { 00:18:43.043 "results": [ 00:18:43.043 { 00:18:43.043 "job": "nvme0n1", 00:18:43.043 "core_mask": "0x2", 00:18:43.043 "workload": "verify", 00:18:43.043 "status": "finished", 00:18:43.043 "verify_range": { 00:18:43.043 "start": 0, 00:18:43.043 "length": 8192 00:18:43.043 }, 00:18:43.043 "queue_depth": 128, 00:18:43.043 "io_size": 4096, 00:18:43.043 "runtime": 1.019355, 00:18:43.043 "iops": 5372.024466451825, 00:18:43.043 "mibps": 20.98447057207744, 00:18:43.043 "io_failed": 0, 00:18:43.043 "io_timeout": 0, 00:18:43.043 "avg_latency_us": 23623.774389370068, 00:18:43.043 "min_latency_us": 4962.011428571429, 00:18:43.043 "max_latency_us": 35701.51619047619 00:18:43.043 } 00:18:43.043 ], 00:18:43.043 "core_count": 1 00:18:43.043 } 00:18:43.043 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:18:43.043 18:56:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:18:43.043 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:43.043 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:18:43.043 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:18:43.043 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:18:43.043 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:43.303 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:18:43.303 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:18:43.303 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:18:43.303 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:43.303 nvmf_trace.0 00:18:43.303 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:18:43.303 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3667498 00:18:43.303 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3667498 ']' 00:18:43.303 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3667498 00:18:43.303 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:43.303 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:43.303 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 3667498 00:18:43.303 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:43.303 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:43.303 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3667498' 00:18:43.303 killing process with pid 3667498 00:18:43.303 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3667498 00:18:43.303 Received shutdown signal, test time was about 1.000000 seconds 00:18:43.303 00:18:43.303 Latency(us) 00:18:43.303 [2024-11-20T17:56:05.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.303 [2024-11-20T17:56:05.628Z] =================================================================================================================== 00:18:43.303 [2024-11-20T17:56:05.628Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:43.303 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3667498 00:18:43.562 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:43.562 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:43.562 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:18:43.562 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:43.562 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:18:43.562 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:43.562 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:43.562 rmmod nvme_tcp 00:18:43.562 rmmod nvme_fabrics 00:18:43.562 rmmod nvme_keyring 00:18:43.562 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:18:43.562 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:18:43.562 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:18:43.562 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3667253 ']' 00:18:43.562 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3667253 00:18:43.562 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3667253 ']' 00:18:43.562 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3667253 00:18:43.562 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:43.562 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:43.562 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3667253 00:18:43.562 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:43.562 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:43.563 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3667253' 00:18:43.563 killing process with pid 3667253 00:18:43.563 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3667253 00:18:43.563 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3667253 00:18:43.822 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:43.822 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:43.822 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:43.822 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:18:43.822 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:18:43.822 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:43.822 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:18:43.822 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:43.822 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:43.822 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.822 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:43.822 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.728 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:45.728 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.cAAKj9WAS0 /tmp/tmp.gS2czxIAT8 /tmp/tmp.bkUCAVdjY3 00:18:45.728 00:18:45.728 real 1m19.943s 00:18:45.728 user 2m1.633s 00:18:45.728 sys 0m31.641s 00:18:45.728 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:45.728 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.728 ************************************ 00:18:45.728 END TEST nvmf_tls 00:18:45.728 ************************************ 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:45.988 ************************************ 00:18:45.988 START TEST nvmf_fips 00:18:45.988 ************************************ 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:45.988 * Looking for test storage... 00:18:45.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:18:45.988 
18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:18:45.988 18:56:08 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:45.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.988 --rc genhtml_branch_coverage=1 00:18:45.988 --rc genhtml_function_coverage=1 00:18:45.988 --rc genhtml_legend=1 00:18:45.988 --rc geninfo_all_blocks=1 00:18:45.988 --rc geninfo_unexecuted_blocks=1 00:18:45.988 00:18:45.988 ' 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:45.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.988 --rc genhtml_branch_coverage=1 00:18:45.988 --rc genhtml_function_coverage=1 00:18:45.988 --rc genhtml_legend=1 00:18:45.988 --rc geninfo_all_blocks=1 00:18:45.988 --rc geninfo_unexecuted_blocks=1 00:18:45.988 00:18:45.988 ' 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:45.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.988 --rc genhtml_branch_coverage=1 00:18:45.988 --rc genhtml_function_coverage=1 00:18:45.988 --rc genhtml_legend=1 00:18:45.988 --rc geninfo_all_blocks=1 00:18:45.988 --rc geninfo_unexecuted_blocks=1 00:18:45.988 00:18:45.988 ' 00:18:45.988 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:45.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.988 --rc genhtml_branch_coverage=1 00:18:45.988 --rc genhtml_function_coverage=1 00:18:45.988 --rc genhtml_legend=1 00:18:45.988 --rc geninfo_all_blocks=1 00:18:45.988 --rc geninfo_unexecuted_blocks=1 00:18:45.988 00:18:45.989 ' 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:45.989 18:56:08 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.989 18:56:08 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:45.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:18:45.989 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:46.249 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:18:46.250 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:18:46.250 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:18:46.250 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:46.250 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:18:46.250 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:46.250 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:18:46.250 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:46.250 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:18:46.250 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:46.250 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:18:46.250 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:18:46.250 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:18:46.250 Error setting digest 00:18:46.250 40421C7D6D7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:46.250 40421C7D6D7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:46.250 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:18:46.250 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:46.250 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:46.250 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:46.250 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:18:46.250 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:46.250 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:46.250 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:46.250 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:46.250 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:46.250 18:56:08 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.250 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:46.250 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.250 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:46.250 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:46.250 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:18:46.250 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:52.820 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:52.820 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:18:52.820 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:52.820 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:52.820 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:52.820 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:52.820 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:52.820 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:18:52.820 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:52.820 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:18:52.820 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:18:52.820 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:18:52.820 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:18:52.820 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:18:52.820 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:18:52.820 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:52.820 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:52.820 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:52.820 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:52.820 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:52.820 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:52.820 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:52.820 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:52.820 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:52.820 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:52.820 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:52.821 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:52.821 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:52.821 Found net devices under 0000:86:00.0: cvl_0_0 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:52.821 Found net devices under 0000:86:00.1: cvl_0_1 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:52.821 18:56:14 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:52.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:52.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:18:52.821 00:18:52.821 --- 10.0.0.2 ping statistics --- 00:18:52.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.821 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:52.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:52.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:18:52.821 00:18:52.821 --- 10.0.0.1 ping statistics --- 00:18:52.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.821 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:52.821 18:56:14 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3671513 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3671513 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3671513 ']' 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.821 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:52.822 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:52.822 [2024-11-20 18:56:14.493267] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:18:52.822 [2024-11-20 18:56:14.493314] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.822 [2024-11-20 18:56:14.569113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.822 [2024-11-20 18:56:14.607105] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:52.822 [2024-11-20 18:56:14.607139] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:52.822 [2024-11-20 18:56:14.607147] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:52.822 [2024-11-20 18:56:14.607152] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:52.822 [2024-11-20 18:56:14.607157] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:52.822 [2024-11-20 18:56:14.607728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.081 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.081 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:18:53.081 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:53.081 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:53.081 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:53.081 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.081 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:18:53.081 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:53.081 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:18:53.081 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Ibs 00:18:53.081 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:53.081 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Ibs 00:18:53.081 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Ibs 00:18:53.081 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Ibs 00:18:53.081 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:53.340 [2024-11-20 18:56:15.533494] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:53.340 [2024-11-20 18:56:15.549489] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:53.340 [2024-11-20 18:56:15.549693] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:53.340 malloc0 00:18:53.340 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:53.341 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3671664 00:18:53.341 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:53.341 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3671664 /var/tmp/bdevperf.sock 00:18:53.341 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3671664 ']' 00:18:53.341 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:53.341 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.341 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:53.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:53.341 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.341 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:53.599 [2024-11-20 18:56:15.684901] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:18:53.599 [2024-11-20 18:56:15.684960] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3671664 ] 00:18:53.599 [2024-11-20 18:56:15.763726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.599 [2024-11-20 18:56:15.806197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:54.536 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:54.536 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:18:54.536 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Ibs 00:18:54.536 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:54.795 [2024-11-20 18:56:16.868516] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:54.795 TLSTESTn1 00:18:54.795 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:54.795 Running I/O for 10 seconds... 
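The PSK configured at fips/fips.sh@137 above uses the NVMe/TCP TLS interchange format, `NVMeTLSkey-1:<hash>:<base64>:`. Assuming the usual layout (hash id `01` selects a 32-byte configured PSK, and the base64 payload carries that PSK followed by a 4-byte CRC32, so it decodes to 36 bytes), the framing of the key traced in this run can be checked structurally; this sketch verifies only the prefix and decoded length, not the CRC itself:

```shell
# Structural check of the TLS PSK interchange key used in this test run.
# Assumed layout: "NVMeTLSkey-1:<hash>:<base64 of 32-byte PSK + 4-byte CRC32>:".
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'

prefix=${key%%:*}                                  # "NVMeTLSkey-1"
b64=$(printf '%s' "$key" | cut -d: -f3)            # base64 payload field
nbytes=$(printf '%s' "$b64" | base64 -d | wc -c)   # decoded byte count

echo "prefix=$prefix bytes=$nbytes"
```

For hash `01` the decoded payload should be 36 bytes (32-byte PSK plus CRC32); a key failing this check would be rejected before `keyring_file_add_key` ever sees it.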
00:18:57.104 5528.00 IOPS, 21.59 MiB/s [2024-11-20T17:56:20.365Z] 5512.50 IOPS, 21.53 MiB/s [2024-11-20T17:56:21.300Z] 5360.33 IOPS, 20.94 MiB/s [2024-11-20T17:56:22.236Z] 5203.25 IOPS, 20.33 MiB/s [2024-11-20T17:56:23.173Z] 5155.20 IOPS, 20.14 MiB/s [2024-11-20T17:56:24.109Z] 5133.17 IOPS, 20.05 MiB/s [2024-11-20T17:56:25.486Z] 5124.71 IOPS, 20.02 MiB/s [2024-11-20T17:56:26.422Z] 5125.88 IOPS, 20.02 MiB/s [2024-11-20T17:56:27.359Z] 5117.56 IOPS, 19.99 MiB/s [2024-11-20T17:56:27.359Z] 5097.70 IOPS, 19.91 MiB/s 00:19:05.034 Latency(us) 00:19:05.034 [2024-11-20T17:56:27.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.034 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:05.034 Verification LBA range: start 0x0 length 0x2000 00:19:05.034 TLSTESTn1 : 10.02 5101.84 19.93 0.00 0.00 25053.09 5149.26 31082.79 00:19:05.034 [2024-11-20T17:56:27.359Z] =================================================================================================================== 00:19:05.034 [2024-11-20T17:56:27.359Z] Total : 5101.84 19.93 0.00 0.00 25053.09 5149.26 31082.79 00:19:05.034 { 00:19:05.034 "results": [ 00:19:05.034 { 00:19:05.034 "job": "TLSTESTn1", 00:19:05.034 "core_mask": "0x4", 00:19:05.034 "workload": "verify", 00:19:05.034 "status": "finished", 00:19:05.034 "verify_range": { 00:19:05.034 "start": 0, 00:19:05.034 "length": 8192 00:19:05.034 }, 00:19:05.034 "queue_depth": 128, 00:19:05.034 "io_size": 4096, 00:19:05.034 "runtime": 10.016975, 00:19:05.034 "iops": 5101.839627232773, 00:19:05.034 "mibps": 19.92906104387802, 00:19:05.034 "io_failed": 0, 00:19:05.034 "io_timeout": 0, 00:19:05.034 "avg_latency_us": 25053.08727767761, 00:19:05.034 "min_latency_us": 5149.257142857143, 00:19:05.034 "max_latency_us": 31082.788571428573 00:19:05.034 } 00:19:05.034 ], 00:19:05.034 "core_count": 1 00:19:05.034 } 00:19:05.034 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:05.034 
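The summary row above is internally consistent and can be cross-checked from the JSON fields alone: throughput is IOPS times the 4096-byte I/O size, and with queue depth 128 Little's law predicts an average latency close to the measured 25053 µs. A quick check using the reported numbers (awk does the floating-point work; all constants come from the result object above):

```shell
# IOPS from the "results" JSON printed above
iops=5101.839627232773

# Throughput: IOPS * io_size, converted to MiB/s
mibps=$(awk -v i="$iops" 'BEGIN { printf "%.2f", i * 4096 / (1024 * 1024) }')

# Little's law estimate: avg latency ~= queue_depth / IOPS
lat_us=$(awk -v i="$iops" 'BEGIN { printf "%.0f", 128 / i * 1e6 }')

echo "$mibps MiB/s, ~$lat_us us average latency"
```

This lands on 19.93 MiB/s, matching the table, and an estimated latency within a few tens of microseconds of the measured average; the small gap is queue-fill and teardown time at the edges of the 10-second run.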
18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:05.034 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:19:05.034 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:19:05.034 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:05.034 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:05.034 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:05.034 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:05.034 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:05.034 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:05.034 nvmf_trace.0 00:19:05.034 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:19:05.034 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3671664 00:19:05.034 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3671664 ']' 00:19:05.034 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3671664 00:19:05.034 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:05.034 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:05.034 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3671664 00:19:05.034 18:56:27 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:05.034 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:05.034 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3671664' 00:19:05.034 killing process with pid 3671664 00:19:05.034 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3671664 00:19:05.034 Received shutdown signal, test time was about 10.000000 seconds 00:19:05.034 00:19:05.034 Latency(us) 00:19:05.034 [2024-11-20T17:56:27.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.034 [2024-11-20T17:56:27.359Z] =================================================================================================================== 00:19:05.034 [2024-11-20T17:56:27.359Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:05.034 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3671664 00:19:05.294 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:05.294 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:05.294 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:05.294 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:05.294 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:05.294 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:05.294 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:05.294 rmmod nvme_tcp 00:19:05.294 rmmod nvme_fabrics 00:19:05.294 rmmod nvme_keyring 00:19:05.294 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:19:05.294 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:05.294 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:05.294 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3671513 ']' 00:19:05.294 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3671513 00:19:05.294 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3671513 ']' 00:19:05.294 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3671513 00:19:05.294 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:05.294 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:05.294 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3671513 00:19:05.294 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:05.294 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:05.294 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3671513' 00:19:05.294 killing process with pid 3671513 00:19:05.294 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3671513 00:19:05.294 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3671513 00:19:05.554 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:05.554 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:05.554 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:05.554 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:19:05.554 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:05.554 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:05.554 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:05.554 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:05.554 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:05.554 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:05.554 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:05.554 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.457 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:07.457 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Ibs 00:19:07.457 00:19:07.457 real 0m21.661s 00:19:07.457 user 0m22.559s 00:19:07.457 sys 0m10.508s 00:19:07.457 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:07.457 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:07.457 ************************************ 00:19:07.457 END TEST nvmf_fips 00:19:07.457 ************************************ 00:19:07.717 18:56:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:07.717 18:56:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:07.717 18:56:29 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:07.717 18:56:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:07.717 ************************************ 00:19:07.717 START TEST nvmf_control_msg_list 00:19:07.717 ************************************ 00:19:07.717 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:07.717 * Looking for test storage... 00:19:07.717 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:07.717 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:07.717 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:19:07.717 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:07.717 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:07.717 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:07.717 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:07.717 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:07.717 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:07.717 18:56:30 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:07.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.717 --rc genhtml_branch_coverage=1 00:19:07.717 --rc genhtml_function_coverage=1 00:19:07.717 --rc genhtml_legend=1 00:19:07.717 --rc geninfo_all_blocks=1 00:19:07.717 --rc geninfo_unexecuted_blocks=1 00:19:07.717 00:19:07.717 ' 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:07.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.717 --rc genhtml_branch_coverage=1 00:19:07.717 --rc genhtml_function_coverage=1 00:19:07.717 --rc genhtml_legend=1 00:19:07.717 --rc geninfo_all_blocks=1 00:19:07.717 --rc geninfo_unexecuted_blocks=1 00:19:07.717 00:19:07.717 ' 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:07.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.717 --rc genhtml_branch_coverage=1 00:19:07.717 --rc genhtml_function_coverage=1 00:19:07.717 --rc genhtml_legend=1 00:19:07.717 --rc geninfo_all_blocks=1 00:19:07.717 --rc geninfo_unexecuted_blocks=1 00:19:07.717 00:19:07.717 ' 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # 
LCOV='lcov 00:19:07.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.717 --rc genhtml_branch_coverage=1 00:19:07.717 --rc genhtml_function_coverage=1 00:19:07.717 --rc genhtml_legend=1 00:19:07.717 --rc geninfo_all_blocks=1 00:19:07.717 --rc geninfo_unexecuted_blocks=1 00:19:07.717 00:19:07.717 ' 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:07.717 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:07.718 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:07.718 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:07.718 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:07.718 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:07.718 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:07.718 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:07.979 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:07.979 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:07.979 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.979 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.979 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.979 18:56:30 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:07.979 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.979 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:07.979 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:07.979 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:07.980 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:07.980 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:07.980 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:07.980 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:07.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:07.980 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:07.980 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:07.980 18:56:30 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:07.980 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:07.980 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:07.980 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:07.980 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:07.980 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:07.980 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:07.980 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.980 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:07.980 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.980 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:07.980 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:07.980 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:07.980 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:14.680 18:56:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:14.680 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:14.680 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:14.680 18:56:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:14.680 Found net devices under 0000:86:00.0: cvl_0_0 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:14.680 18:56:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:14.680 Found net devices under 0000:86:00.1: cvl_0_1 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.680 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:14.681 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:19:14.681 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:14.681 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:14.681 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:14.681 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:14.681 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:14.681 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:14.681 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:14.681 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:14.681 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:14.681 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:14.681 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:14.681 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:14.681 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:14.681 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:14.681 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:14.681 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:14.681 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:14.681 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:14.681 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:14.681 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:14.681 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:14.681 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:14.681 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:14.681 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:14.681 18:56:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:14.681 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:14.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:14.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.416 ms 00:19:14.681 00:19:14.681 --- 10.0.0.2 ping statistics --- 00:19:14.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.681 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:19:14.681 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:14.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:14.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:19:14.681 00:19:14.681 --- 10.0.0.1 ping statistics --- 00:19:14.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.681 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:19:14.681 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:14.681 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:19:14.681 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:14.681 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:14.681 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:14.681 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:14.681 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:19:14.681 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:14.681 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:14.681 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:14.681 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:14.681 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:14.681 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:14.681 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3677135 00:19:14.681 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:14.681 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3677135 00:19:14.681 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 3677135 ']' 00:19:14.681 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.681 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:14.681 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:14.681 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:14.681 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:14.681 [2024-11-20 18:56:36.089679] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:19:14.681 [2024-11-20 18:56:36.089725] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:14.681 [2024-11-20 18:56:36.169008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.681 [2024-11-20 18:56:36.210106] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:14.681 [2024-11-20 18:56:36.210141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:14.681 [2024-11-20 18:56:36.210149] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:14.681 [2024-11-20 18:56:36.210154] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:14.681 [2024-11-20 18:56:36.210160] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:14.681 [2024-11-20 18:56:36.210720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.681 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:14.681 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:19:14.681 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:14.681 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:14.681 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:14.681 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:14.681 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:14.681 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:14.681 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:14.681 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.681 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:14.681 [2024-11-20 18:56:36.969130] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:14.681 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.681 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:14.681 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.681 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:14.681 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.681 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:14.681 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.681 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:14.681 Malloc0 00:19:14.681 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.681 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:14.681 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.681 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:14.941 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.941 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:14.941 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.941 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:14.941 [2024-11-20 18:56:37.009510] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:14.941 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.941 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3677284 00:19:14.941 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:14.941 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3677285 00:19:14.941 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:14.941 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3677286 00:19:14.941 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3677284 00:19:14.941 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:14.941 [2024-11-20 18:56:37.087988] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:19:14.941 [2024-11-20 18:56:37.107943] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:14.941 [2024-11-20 18:56:37.108090] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:15.878 Initializing NVMe Controllers 00:19:15.878 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:15.878 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:15.878 Initialization complete. Launching workers. 00:19:15.878 ======================================================== 00:19:15.878 Latency(us) 00:19:15.878 Device Information : IOPS MiB/s Average min max 00:19:15.878 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 6482.00 25.32 153.93 125.97 361.02 00:19:15.878 ======================================================== 00:19:15.878 Total : 6482.00 25.32 153.93 125.97 361.02 00:19:15.878 00:19:15.878 Initializing NVMe Controllers 00:19:15.878 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:15.878 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:15.878 Initialization complete. Launching workers. 
00:19:15.878 ======================================================== 00:19:15.878 Latency(us) 00:19:15.878 Device Information : IOPS MiB/s Average min max 00:19:15.878 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40880.07 40400.85 41000.61 00:19:15.878 ======================================================== 00:19:15.878 Total : 25.00 0.10 40880.07 40400.85 41000.61 00:19:15.878 00:19:15.878 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3677285 00:19:15.878 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3677286 00:19:16.138 Initializing NVMe Controllers 00:19:16.138 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:16.138 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:16.138 Initialization complete. Launching workers. 00:19:16.138 ======================================================== 00:19:16.138 Latency(us) 00:19:16.138 Device Information : IOPS MiB/s Average min max 00:19:16.138 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 6321.98 24.70 157.83 140.72 359.20 00:19:16.138 ======================================================== 00:19:16.138 Total : 6321.98 24.70 157.83 140.72 359.20 00:19:16.138 00:19:16.138 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:16.138 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:16.138 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:16.138 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:16.138 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:16.138 18:56:38 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:16.138 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:16.138 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:16.138 rmmod nvme_tcp 00:19:16.138 rmmod nvme_fabrics 00:19:16.138 rmmod nvme_keyring 00:19:16.138 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:16.138 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:16.138 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:16.138 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 3677135 ']' 00:19:16.138 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3677135 00:19:16.138 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 3677135 ']' 00:19:16.138 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 3677135 00:19:16.138 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:19:16.138 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:16.138 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3677135 00:19:16.138 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:16.138 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:16.138 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 3677135' 00:19:16.138 killing process with pid 3677135 00:19:16.138 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 3677135 00:19:16.138 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 3677135 00:19:16.398 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:16.398 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:16.398 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:16.398 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:16.398 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:16.398 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:16.398 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:16.398 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:16.398 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:16.398 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.398 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:16.398 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.306 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:18.306 00:19:18.306 real 0m10.777s 00:19:18.306 user 0m7.253s 
00:19:18.306 sys 0m5.580s 00:19:18.306 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:18.306 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:18.306 ************************************ 00:19:18.306 END TEST nvmf_control_msg_list 00:19:18.306 ************************************ 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:18.566 ************************************ 00:19:18.566 START TEST nvmf_wait_for_buf 00:19:18.566 ************************************ 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:18.566 * Looking for test storage... 
00:19:18.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:19:18.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.566 --rc genhtml_branch_coverage=1 00:19:18.566 --rc genhtml_function_coverage=1 00:19:18.566 --rc genhtml_legend=1 00:19:18.566 --rc geninfo_all_blocks=1 00:19:18.566 --rc geninfo_unexecuted_blocks=1 00:19:18.566 00:19:18.566 ' 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:18.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.566 --rc genhtml_branch_coverage=1 00:19:18.566 --rc genhtml_function_coverage=1 00:19:18.566 --rc genhtml_legend=1 00:19:18.566 --rc geninfo_all_blocks=1 00:19:18.566 --rc geninfo_unexecuted_blocks=1 00:19:18.566 00:19:18.566 ' 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:18.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.566 --rc genhtml_branch_coverage=1 00:19:18.566 --rc genhtml_function_coverage=1 00:19:18.566 --rc genhtml_legend=1 00:19:18.566 --rc geninfo_all_blocks=1 00:19:18.566 --rc geninfo_unexecuted_blocks=1 00:19:18.566 00:19:18.566 ' 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:18.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.566 --rc genhtml_branch_coverage=1 00:19:18.566 --rc genhtml_function_coverage=1 00:19:18.566 --rc genhtml_legend=1 00:19:18.566 --rc geninfo_all_blocks=1 00:19:18.566 --rc geninfo_unexecuted_blocks=1 00:19:18.566 00:19:18.566 ' 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:18.566 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:18.567 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.567 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.567 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.567 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:18.567 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.567 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:18.567 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:18.567 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:18.567 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:18.567 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:19:18.567 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:18.567 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:18.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:18.567 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:18.567 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:18.567 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:18.567 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:18.567 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:18.567 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:18.827 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:18.827 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:18.827 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:18.827 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.827 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:18.827 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.827 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:18.827 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:19:18.827 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:19:18.827 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:25.399 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:25.399 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:25.399 Found net devices under 0000:86:00.0: cvl_0_0 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:25.399 18:56:46 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:25.399 Found net devices under 0000:86:00.1: cvl_0_1 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:25.399 18:56:46 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:25.399 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:25.400 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:25.400 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:25.400 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:25.400 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:25.400 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:25.400 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:25.400 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:25.400 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:25.400 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:25.400 18:56:46 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:25.400 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:25.400 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:25.400 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:25.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.496 ms 00:19:25.400 00:19:25.400 --- 10.0.0.2 ping statistics --- 00:19:25.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.400 rtt min/avg/max/mdev = 0.496/0.496/0.496/0.000 ms 00:19:25.400 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:25.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:25.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:19:25.400 00:19:25.400 --- 10.0.0.1 ping statistics --- 00:19:25.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.400 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:19:25.400 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:25.400 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:19:25.400 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:25.400 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:25.400 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:25.400 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:25.400 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:25.400 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:25.400 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:25.400 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:25.400 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:25.400 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:25.400 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:25.400 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3680934 00:19:25.400 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:25.400 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 3680934 00:19:25.400 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 3680934 ']' 00:19:25.400 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.400 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.400 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.400 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.400 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:25.400 [2024-11-20 18:56:46.877682] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:19:25.400 [2024-11-20 18:56:46.877734] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:25.400 [2024-11-20 18:56:46.959516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.400 [2024-11-20 18:56:47.003184] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:25.400 [2024-11-20 18:56:47.003225] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:25.400 [2024-11-20 18:56:47.003233] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:25.400 [2024-11-20 18:56:47.003239] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:25.400 [2024-11-20 18:56:47.003245] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:25.400 [2024-11-20 18:56:47.003816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.400 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:25.400 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:19:25.400 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:25.400 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:25.400 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:25.659 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.659 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:25.659 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:25.659 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:25.659 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.659 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:25.659 
18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.659 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:25.659 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.659 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:25.659 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.659 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:25.659 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.659 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:25.659 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.659 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:25.660 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.660 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:25.660 Malloc0 00:19:25.660 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.660 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:25.660 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.660 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:19:25.660 [2024-11-20 18:56:47.850602] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:25.660 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.660 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:25.660 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.660 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:25.660 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.660 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:25.660 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.660 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:25.660 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.660 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:25.660 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.660 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:25.660 [2024-11-20 18:56:47.878805] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:25.660 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:25.660 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:25.660 [2024-11-20 18:56:47.961278] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:27.567 Initializing NVMe Controllers 00:19:27.567 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:27.567 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:27.567 Initialization complete. Launching workers. 00:19:27.567 ======================================================== 00:19:27.567 Latency(us) 00:19:27.567 Device Information : IOPS MiB/s Average min max 00:19:27.567 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 124.00 15.50 33571.39 7278.11 71831.84 00:19:27.567 ======================================================== 00:19:27.567 Total : 124.00 15.50 33571.39 7278.11 71831.84 00:19:27.567 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.567 18:56:49 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1958 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1958 -eq 0 ]] 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:27.567 rmmod nvme_tcp 00:19:27.567 rmmod nvme_fabrics 00:19:27.567 rmmod nvme_keyring 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3680934 ']' 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3680934 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 3680934 ']' 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 3680934 
00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3680934 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3680934' 00:19:27.567 killing process with pid 3680934 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 3680934 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 3680934 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:27.567 18:56:49 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:27.567 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.102 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:30.102 00:19:30.102 real 0m11.230s 00:19:30.102 user 0m4.851s 00:19:30.102 sys 0m4.997s 00:19:30.103 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:30.103 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:30.103 ************************************ 00:19:30.103 END TEST nvmf_wait_for_buf 00:19:30.103 ************************************ 00:19:30.103 18:56:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:30.103 18:56:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:19:30.103 18:56:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:19:30.103 18:56:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:19:30.103 18:56:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:19:30.103 18:56:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:35.378 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:35.378 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:19:35.378 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:35.378 
18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:35.378 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:35.378 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:35.378 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:35.378 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:19:35.378 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:35.378 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:19:35.378 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:19:35.378 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:19:35.378 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:19:35.378 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:19:35.378 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:19:35.378 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:35.378 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:35.378 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:35.378 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:35.378 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:35.378 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:35.378 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:35.378 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:35.378 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:35.378 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:35.378 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:35.378 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:35.378 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:35.378 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:35.378 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:35.378 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:35.378 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:35.378 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:35.378 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:35.378 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:35.378 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:35.379 18:56:57 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:35.379 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:35.379 Found net devices under 0000:86:00.0: cvl_0_0 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:35.379 Found net devices under 0000:86:00.1: cvl_0_1 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:35.379 ************************************ 00:19:35.379 START TEST nvmf_perf_adq 00:19:35.379 ************************************ 00:19:35.379 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:35.639 * Looking for test storage... 00:19:35.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:35.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.639 --rc genhtml_branch_coverage=1 00:19:35.639 --rc genhtml_function_coverage=1 00:19:35.639 --rc genhtml_legend=1 00:19:35.639 --rc geninfo_all_blocks=1 00:19:35.639 --rc geninfo_unexecuted_blocks=1 00:19:35.639 00:19:35.639 ' 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:35.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.639 --rc genhtml_branch_coverage=1 00:19:35.639 --rc genhtml_function_coverage=1 00:19:35.639 --rc genhtml_legend=1 00:19:35.639 --rc geninfo_all_blocks=1 00:19:35.639 --rc geninfo_unexecuted_blocks=1 00:19:35.639 00:19:35.639 ' 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:35.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.639 --rc genhtml_branch_coverage=1 00:19:35.639 --rc genhtml_function_coverage=1 00:19:35.639 --rc genhtml_legend=1 00:19:35.639 --rc geninfo_all_blocks=1 00:19:35.639 --rc geninfo_unexecuted_blocks=1 00:19:35.639 00:19:35.639 ' 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:35.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.639 --rc genhtml_branch_coverage=1 00:19:35.639 --rc genhtml_function_coverage=1 00:19:35.639 --rc genhtml_legend=1 00:19:35.639 --rc geninfo_all_blocks=1 00:19:35.639 --rc geninfo_unexecuted_blocks=1 00:19:35.639 00:19:35.639 ' 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.639 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.640 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.640 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.640 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.640 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.640 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:35.640 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.640 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:19:35.640 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:35.640 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:35.640 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:35.640 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:35.640 18:56:57 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:35.640 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:35.640 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:35.640 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:35.640 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:35.640 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:35.640 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:35.640 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:35.640 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:42.216 18:57:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:42.216 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:42.216 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:42.216 Found net devices under 0000:86:00.0: cvl_0_0 00:19:42.216 18:57:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:42.216 Found net devices under 0000:86:00.1: cvl_0_1 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:19:42.216 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:42.474 18:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:45.008 18:57:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:19:50.296 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:19:50.296 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:50.296 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:50.296 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:50.297 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:50.297 18:57:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:50.297 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:50.297 Found net devices under 0000:86:00.0: cvl_0_0 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:50.297 Found net devices under 0000:86:00.1: cvl_0_1 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:50.297 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:50.297 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:50.297 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:50.297 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:50.297 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:50.297 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:50.297 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.426 ms 00:19:50.297 00:19:50.298 --- 10.0.0.2 ping statistics --- 00:19:50.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.298 rtt min/avg/max/mdev = 0.426/0.426/0.426/0.000 ms 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:50.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:50.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:19:50.298 00:19:50.298 --- 10.0.0.1 ping statistics --- 00:19:50.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.298 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3690005 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3690005 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3690005 ']' 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:50.298 [2024-11-20 18:57:12.176774] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:19:50.298 [2024-11-20 18:57:12.176817] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:50.298 [2024-11-20 18:57:12.253214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:50.298 [2024-11-20 18:57:12.297164] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:50.298 [2024-11-20 18:57:12.297204] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:50.298 [2024-11-20 18:57:12.297211] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:50.298 [2024-11-20 18:57:12.297217] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:50.298 [2024-11-20 18:57:12.297222] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:50.298 [2024-11-20 18:57:12.298748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:50.298 [2024-11-20 18:57:12.298855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.298 [2024-11-20 18:57:12.298966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.298 [2024-11-20 18:57:12.298967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:50.298 18:57:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:50.298 [2024-11-20 18:57:12.509595] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:50.298 Malloc1 00:19:50.298 18:57:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:50.298 [2024-11-20 18:57:12.572126] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3690056 00:19:50.298 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:19:50.298 18:57:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:52.832 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:19:52.832 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.832 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:52.832 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.832 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:19:52.832 "tick_rate": 2100000000, 00:19:52.832 "poll_groups": [ 00:19:52.832 { 00:19:52.832 "name": "nvmf_tgt_poll_group_000", 00:19:52.832 "admin_qpairs": 1, 00:19:52.832 "io_qpairs": 1, 00:19:52.832 "current_admin_qpairs": 1, 00:19:52.832 "current_io_qpairs": 1, 00:19:52.832 "pending_bdev_io": 0, 00:19:52.832 "completed_nvme_io": 19328, 00:19:52.832 "transports": [ 00:19:52.833 { 00:19:52.833 "trtype": "TCP" 00:19:52.833 } 00:19:52.833 ] 00:19:52.833 }, 00:19:52.833 { 00:19:52.833 "name": "nvmf_tgt_poll_group_001", 00:19:52.833 "admin_qpairs": 0, 00:19:52.833 "io_qpairs": 1, 00:19:52.833 "current_admin_qpairs": 0, 00:19:52.833 "current_io_qpairs": 1, 00:19:52.833 "pending_bdev_io": 0, 00:19:52.833 "completed_nvme_io": 19830, 00:19:52.833 "transports": [ 00:19:52.833 { 00:19:52.833 "trtype": "TCP" 00:19:52.833 } 00:19:52.833 ] 00:19:52.833 }, 00:19:52.833 { 00:19:52.833 "name": "nvmf_tgt_poll_group_002", 00:19:52.833 "admin_qpairs": 0, 00:19:52.833 "io_qpairs": 1, 00:19:52.833 "current_admin_qpairs": 0, 00:19:52.833 "current_io_qpairs": 1, 00:19:52.833 "pending_bdev_io": 0, 00:19:52.833 "completed_nvme_io": 19596, 00:19:52.833 
"transports": [ 00:19:52.833 { 00:19:52.833 "trtype": "TCP" 00:19:52.833 } 00:19:52.833 ] 00:19:52.833 }, 00:19:52.833 { 00:19:52.833 "name": "nvmf_tgt_poll_group_003", 00:19:52.833 "admin_qpairs": 0, 00:19:52.833 "io_qpairs": 1, 00:19:52.833 "current_admin_qpairs": 0, 00:19:52.833 "current_io_qpairs": 1, 00:19:52.833 "pending_bdev_io": 0, 00:19:52.833 "completed_nvme_io": 19750, 00:19:52.833 "transports": [ 00:19:52.833 { 00:19:52.833 "trtype": "TCP" 00:19:52.833 } 00:19:52.833 ] 00:19:52.833 } 00:19:52.833 ] 00:19:52.833 }' 00:19:52.833 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:52.833 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:19:52.833 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:19:52.833 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:19:52.833 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3690056 00:20:00.953 Initializing NVMe Controllers 00:20:00.953 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:00.953 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:00.953 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:00.953 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:00.953 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:00.953 Initialization complete. Launching workers. 
00:20:00.953 ======================================================== 00:20:00.953 Latency(us) 00:20:00.953 Device Information : IOPS MiB/s Average min max 00:20:00.953 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10428.60 40.74 6136.51 2123.09 10855.33 00:20:00.953 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10603.20 41.42 6036.37 2056.29 13584.47 00:20:00.953 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10457.00 40.85 6119.99 2368.60 10290.01 00:20:00.953 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10423.10 40.72 6141.00 2068.75 10379.78 00:20:00.953 ======================================================== 00:20:00.953 Total : 41911.90 163.72 6108.17 2056.29 13584.47 00:20:00.953 00:20:00.953 18:57:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:20:00.953 18:57:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:00.953 18:57:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:00.953 18:57:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:00.953 18:57:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:00.953 18:57:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:00.953 18:57:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:00.953 rmmod nvme_tcp 00:20:00.953 rmmod nvme_fabrics 00:20:00.953 rmmod nvme_keyring 00:20:00.953 18:57:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:00.953 18:57:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:00.953 18:57:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:00.953 18:57:22 
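As a sanity check on the `spdk_nvme_perf` summary table above, the Total row can be recomputed from the four per-core rows: total IOPS is the plain sum, and the total average latency is the IOPS-weighted mean of the per-core averages. A quick offline sketch, with the numbers copied from the log:

```python
# (IOPS, average latency in us) per core, copied from the perf table above.
rows = [
    (10428.60, 6136.51),  # lcore 4
    (10603.20, 6036.37),  # lcore 5
    (10457.00, 6119.99),  # lcore 6
    (10423.10, 6141.00),  # lcore 7
]

total_iops = sum(iops for iops, _ in rows)
# Average latency weighted by each core's IOPS share.
weighted_avg_lat = sum(iops * lat for iops, lat in rows) / total_iops

print(round(total_iops, 2))        # → 41911.9 (Total IOPS row)
print(round(weighted_avg_lat, 2))  # → 6108.17 (Total Average row)
```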
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3690005 ']' 00:20:00.953 18:57:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3690005 00:20:00.953 18:57:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3690005 ']' 00:20:00.953 18:57:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3690005 00:20:00.953 18:57:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:00.953 18:57:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:00.953 18:57:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3690005 00:20:00.953 18:57:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:00.953 18:57:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:00.953 18:57:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3690005' 00:20:00.953 killing process with pid 3690005 00:20:00.953 18:57:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3690005 00:20:00.953 18:57:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3690005 00:20:00.953 18:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:00.953 18:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:00.953 18:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:00.953 18:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:00.953 18:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:00.953 
18:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:00.953 18:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:00.953 18:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:00.953 18:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:00.954 18:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.954 18:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:00.954 18:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.858 18:57:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:02.858 18:57:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:02.858 18:57:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:02.858 18:57:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:04.235 18:57:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:06.139 18:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:11.416 18:57:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:11.416 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:11.416 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:11.417 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:11.417 Found net devices under 0000:86:00.0: cvl_0_0 00:20:11.417 18:57:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:11.417 Found net devices under 0000:86:00.1: cvl_0_1 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:11.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:11.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.476 ms 00:20:11.417 00:20:11.417 --- 10.0.0.2 ping statistics --- 00:20:11.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.417 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms 00:20:11.417 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:11.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:11.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:20:11.676 00:20:11.676 --- 10.0.0.1 ping statistics --- 00:20:11.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.676 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:20:11.676 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:11.676 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:11.676 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:11.676 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:11.676 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:11.676 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:11.676 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:11.676 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:11.676 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:11.676 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:20:11.676 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:11.676 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:11.676 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:11.676 net.core.busy_poll = 1 00:20:11.676 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:11.676 net.core.busy_read = 1 00:20:11.676 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:11.676 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:11.676 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:11.676 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:11.676 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:11.935 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:11.935 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:11.935 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:11.935 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:11.935 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3693900 00:20:11.935 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3693900 00:20:11.935 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:20:11.935 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3693900 ']' 00:20:11.935 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.935 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:11.935 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.935 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:11.935 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:11.935 [2024-11-20 18:57:34.081183] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:20:11.935 [2024-11-20 18:57:34.081243] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.935 [2024-11-20 18:57:34.162366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:11.935 [2024-11-20 18:57:34.204908] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.935 [2024-11-20 18:57:34.204944] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:11.935 [2024-11-20 18:57:34.204951] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:11.935 [2024-11-20 18:57:34.204960] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:11.935 [2024-11-20 18:57:34.204965] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:11.935 [2024-11-20 18:57:34.206495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.935 [2024-11-20 18:57:34.206609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:11.935 [2024-11-20 18:57:34.206718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.935 [2024-11-20 18:57:34.206719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:11.935 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:11.935 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:11.935 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:11.935 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:11.935 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:12.194 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.194 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:12.194 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:12.194 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:12.194 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.194 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:12.194 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:12.194 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:12.194 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:12.194 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.194 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:12.194 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.194 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:12.194 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.194 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:12.194 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.194 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:12.194 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.194 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:12.194 [2024-11-20 18:57:34.401085] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.194 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.194 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:12.194 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.194 18:57:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:12.194 Malloc1 00:20:12.194 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.194 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:12.194 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.194 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:12.195 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.195 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:12.195 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.195 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:12.195 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.195 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:12.195 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.195 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:12.195 [2024-11-20 18:57:34.461064] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:12.195 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.195 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3694073 
00:20:12.195 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:12.195 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:14.730 18:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:20:14.730 18:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.730 18:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.730 18:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.730 18:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:20:14.730 "tick_rate": 2100000000, 00:20:14.730 "poll_groups": [ 00:20:14.730 { 00:20:14.730 "name": "nvmf_tgt_poll_group_000", 00:20:14.730 "admin_qpairs": 1, 00:20:14.730 "io_qpairs": 3, 00:20:14.730 "current_admin_qpairs": 1, 00:20:14.730 "current_io_qpairs": 3, 00:20:14.730 "pending_bdev_io": 0, 00:20:14.730 "completed_nvme_io": 30328, 00:20:14.730 "transports": [ 00:20:14.730 { 00:20:14.730 "trtype": "TCP" 00:20:14.730 } 00:20:14.730 ] 00:20:14.730 }, 00:20:14.730 { 00:20:14.730 "name": "nvmf_tgt_poll_group_001", 00:20:14.730 "admin_qpairs": 0, 00:20:14.730 "io_qpairs": 1, 00:20:14.730 "current_admin_qpairs": 0, 00:20:14.730 "current_io_qpairs": 1, 00:20:14.730 "pending_bdev_io": 0, 00:20:14.730 "completed_nvme_io": 27598, 00:20:14.730 "transports": [ 00:20:14.730 { 00:20:14.730 "trtype": "TCP" 00:20:14.730 } 00:20:14.730 ] 00:20:14.730 }, 00:20:14.730 { 00:20:14.730 "name": "nvmf_tgt_poll_group_002", 00:20:14.730 "admin_qpairs": 0, 00:20:14.730 "io_qpairs": 0, 00:20:14.730 "current_admin_qpairs": 0, 
00:20:14.730 "current_io_qpairs": 0, 00:20:14.730 "pending_bdev_io": 0, 00:20:14.730 "completed_nvme_io": 0, 00:20:14.730 "transports": [ 00:20:14.730 { 00:20:14.730 "trtype": "TCP" 00:20:14.730 } 00:20:14.730 ] 00:20:14.730 }, 00:20:14.730 { 00:20:14.730 "name": "nvmf_tgt_poll_group_003", 00:20:14.730 "admin_qpairs": 0, 00:20:14.730 "io_qpairs": 0, 00:20:14.730 "current_admin_qpairs": 0, 00:20:14.730 "current_io_qpairs": 0, 00:20:14.730 "pending_bdev_io": 0, 00:20:14.730 "completed_nvme_io": 0, 00:20:14.730 "transports": [ 00:20:14.730 { 00:20:14.730 "trtype": "TCP" 00:20:14.730 } 00:20:14.730 ] 00:20:14.730 } 00:20:14.730 ] 00:20:14.730 }' 00:20:14.730 18:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:14.730 18:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:20:14.730 18:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:20:14.730 18:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:20:14.730 18:57:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3694073 00:20:22.852 Initializing NVMe Controllers 00:20:22.852 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:22.852 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:22.852 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:22.852 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:22.852 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:22.852 Initialization complete. Launching workers. 
00:20:22.852 ======================================================== 00:20:22.852 Latency(us) 00:20:22.852 Device Information : IOPS MiB/s Average min max 00:20:22.852 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5611.90 21.92 11454.64 1272.08 59621.41 00:20:22.852 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5447.00 21.28 11753.60 1358.66 60049.89 00:20:22.852 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14900.50 58.21 4294.74 1478.51 45557.54 00:20:22.852 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4861.70 18.99 13166.45 1782.40 60508.45 00:20:22.852 ======================================================== 00:20:22.852 Total : 30821.09 120.39 8316.03 1272.08 60508.45 00:20:22.852 00:20:22.852 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:20:22.852 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:22.852 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:22.852 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:22.852 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:22.852 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:22.852 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:22.853 rmmod nvme_tcp 00:20:22.853 rmmod nvme_fabrics 00:20:22.853 rmmod nvme_keyring 00:20:22.853 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:22.853 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:22.853 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:22.853 18:57:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3693900 ']' 00:20:22.853 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3693900 00:20:22.853 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3693900 ']' 00:20:22.853 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3693900 00:20:22.853 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:22.853 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:22.853 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3693900 00:20:22.853 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:22.853 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:22.853 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3693900' 00:20:22.853 killing process with pid 3693900 00:20:22.853 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3693900 00:20:22.853 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3693900 00:20:22.853 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:22.853 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:22.853 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:22.853 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:22.853 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:22.853 
18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:22.853 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:22.853 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:22.853 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:22.853 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.853 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:22.853 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.146 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:26.146 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:20:26.146 00:20:26.146 real 0m50.433s 00:20:26.146 user 2m43.849s 00:20:26.146 sys 0m10.467s 00:20:26.146 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:26.146 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.146 ************************************ 00:20:26.146 END TEST nvmf_perf_adq 00:20:26.146 ************************************ 00:20:26.146 18:57:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:26.146 18:57:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:26.146 18:57:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:26.146 18:57:48 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:20:26.146 ************************************ 00:20:26.146 START TEST nvmf_shutdown 00:20:26.146 ************************************ 00:20:26.146 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:26.146 * Looking for test storage... 00:20:26.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:26.146 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:26.146 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:26.147 18:57:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:26.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.147 --rc genhtml_branch_coverage=1 00:20:26.147 --rc genhtml_function_coverage=1 00:20:26.147 --rc genhtml_legend=1 00:20:26.147 --rc geninfo_all_blocks=1 00:20:26.147 --rc geninfo_unexecuted_blocks=1 00:20:26.147 00:20:26.147 ' 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:26.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.147 --rc genhtml_branch_coverage=1 00:20:26.147 --rc genhtml_function_coverage=1 00:20:26.147 --rc genhtml_legend=1 00:20:26.147 --rc geninfo_all_blocks=1 00:20:26.147 --rc geninfo_unexecuted_blocks=1 00:20:26.147 00:20:26.147 ' 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:26.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.147 --rc genhtml_branch_coverage=1 00:20:26.147 --rc genhtml_function_coverage=1 00:20:26.147 --rc genhtml_legend=1 00:20:26.147 --rc geninfo_all_blocks=1 00:20:26.147 --rc geninfo_unexecuted_blocks=1 00:20:26.147 00:20:26.147 ' 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:26.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.147 --rc genhtml_branch_coverage=1 00:20:26.147 --rc genhtml_function_coverage=1 00:20:26.147 --rc genhtml_legend=1 00:20:26.147 --rc geninfo_all_blocks=1 00:20:26.147 --rc geninfo_unexecuted_blocks=1 00:20:26.147 00:20:26.147 ' 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:26.147 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:26.147 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:26.148 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:26.148 ************************************ 00:20:26.148 START TEST nvmf_shutdown_tc1 00:20:26.148 ************************************ 00:20:26.148 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:20:26.148 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:20:26.148 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:26.148 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:26.148 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:26.148 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:26.148 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:26.148 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:26.148 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.148 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:20:26.148 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.148 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:26.148 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:26.148 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:26.148 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:32.720 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:32.720 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:32.720 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:32.720 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:32.720 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:32.720 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:32.720 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:32.720 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:20:32.720 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:32.720 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:20:32.720 18:57:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:20:32.720 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:20:32.720 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:20:32.720 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:20:32.720 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:32.720 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:32.720 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:32.720 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:32.720 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:32.720 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:32.720 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:32.720 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:32.720 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:32.720 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:32.720 18:57:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:32.720 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:32.720 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:32.720 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:32.720 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:32.720 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:32.720 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:32.720 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:32.720 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:32.721 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:32.721 18:57:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:32.721 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:32.721 Found net devices under 0000:86:00.0: cvl_0_0 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:32.721 Found net devices under 0000:86:00.1: cvl_0_1 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:32.721 18:57:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:32.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:32.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.424 ms 00:20:32.721 00:20:32.721 --- 10.0.0.2 ping statistics --- 00:20:32.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.721 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:32.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:32.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:20:32.721 00:20:32.721 --- 10.0.0.1 ping statistics --- 00:20:32.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.721 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:32.721 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:32.722 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:32.722 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:32.722 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:32.722 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:32.722 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:32.722 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:32.722 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:32.722 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:32.722 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3699522 00:20:32.722 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:32.722 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3699522 00:20:32.722 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3699522 ']' 00:20:32.722 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:32.722 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:32.722 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:32.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:32.722 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:32.722 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:32.722 [2024-11-20 18:57:54.462165] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:20:32.722 [2024-11-20 18:57:54.462219] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:32.722 [2024-11-20 18:57:54.539910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:32.722 [2024-11-20 18:57:54.582727] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:32.722 [2024-11-20 18:57:54.582762] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:32.722 [2024-11-20 18:57:54.582769] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:32.722 [2024-11-20 18:57:54.582775] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:32.722 [2024-11-20 18:57:54.582780] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:32.722 [2024-11-20 18:57:54.584359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:32.722 [2024-11-20 18:57:54.584467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:32.722 [2024-11-20 18:57:54.584574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:32.722 [2024-11-20 18:57:54.584576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:32.981 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:32.981 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:32.981 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:32.981 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:32.981 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:33.241 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:33.241 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:33.241 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.241 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:33.241 [2024-11-20 18:57:55.340422] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:33.241 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.241 18:57:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:33.241 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:33.241 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:33.241 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:33.241 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:33.241 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:33.241 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:33.241 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:33.241 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:33.241 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:33.241 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:33.241 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:33.241 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:33.241 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:33.241 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:20:33.241 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:33.241 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:33.241 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:33.241 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:33.241 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:33.241 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:33.241 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:33.241 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:33.241 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:33.241 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:33.241 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:33.241 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.241 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:33.241 Malloc1 00:20:33.241 [2024-11-20 18:57:55.449074] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:33.241 Malloc2 00:20:33.241 Malloc3 00:20:33.241 Malloc4 00:20:33.501 Malloc5 00:20:33.501 Malloc6 00:20:33.501 Malloc7 00:20:33.501 Malloc8 00:20:33.501 Malloc9 
00:20:33.501 Malloc10 00:20:33.761 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.761 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:33.761 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:33.761 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:33.761 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3699803 00:20:33.761 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3699803 /var/tmp/bdevperf.sock 00:20:33.761 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3699803 ']' 00:20:33.761 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:33.761 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:33.761 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:33.761 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:33.761 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:33.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:33.761 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:33.761 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:33.761 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:33.761 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:33.761 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:33.761 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:33.761 { 00:20:33.761 "params": { 00:20:33.761 "name": "Nvme$subsystem", 00:20:33.761 "trtype": "$TEST_TRANSPORT", 00:20:33.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.761 "adrfam": "ipv4", 00:20:33.761 "trsvcid": "$NVMF_PORT", 00:20:33.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.761 "hdgst": ${hdgst:-false}, 00:20:33.761 "ddgst": ${ddgst:-false} 00:20:33.761 }, 00:20:33.761 "method": "bdev_nvme_attach_controller" 00:20:33.761 } 00:20:33.761 EOF 00:20:33.761 )") 00:20:33.761 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:33.761 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:33.761 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:33.761 { 00:20:33.761 "params": { 00:20:33.761 "name": "Nvme$subsystem", 00:20:33.761 "trtype": "$TEST_TRANSPORT", 00:20:33.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.761 "adrfam": "ipv4", 00:20:33.761 "trsvcid": "$NVMF_PORT", 00:20:33.761 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.761 "hdgst": ${hdgst:-false}, 00:20:33.761 "ddgst": ${ddgst:-false} 00:20:33.761 }, 00:20:33.761 "method": "bdev_nvme_attach_controller" 00:20:33.761 } 00:20:33.761 EOF 00:20:33.761 )") 00:20:33.761 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:33.761 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:33.761 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:33.761 { 00:20:33.761 "params": { 00:20:33.761 "name": "Nvme$subsystem", 00:20:33.761 "trtype": "$TEST_TRANSPORT", 00:20:33.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.761 "adrfam": "ipv4", 00:20:33.761 "trsvcid": "$NVMF_PORT", 00:20:33.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.761 "hdgst": ${hdgst:-false}, 00:20:33.761 "ddgst": ${ddgst:-false} 00:20:33.761 }, 00:20:33.761 "method": "bdev_nvme_attach_controller" 00:20:33.761 } 00:20:33.761 EOF 00:20:33.761 )") 00:20:33.761 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:33.761 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:33.761 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:33.761 { 00:20:33.761 "params": { 00:20:33.761 "name": "Nvme$subsystem", 00:20:33.761 "trtype": "$TEST_TRANSPORT", 00:20:33.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.761 "adrfam": "ipv4", 00:20:33.761 "trsvcid": "$NVMF_PORT", 00:20:33.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.761 "hdgst": 
${hdgst:-false}, 00:20:33.761 "ddgst": ${ddgst:-false} 00:20:33.761 }, 00:20:33.761 "method": "bdev_nvme_attach_controller" 00:20:33.761 } 00:20:33.761 EOF 00:20:33.761 )") 00:20:33.761 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:33.761 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:33.761 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:33.761 { 00:20:33.761 "params": { 00:20:33.761 "name": "Nvme$subsystem", 00:20:33.761 "trtype": "$TEST_TRANSPORT", 00:20:33.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.761 "adrfam": "ipv4", 00:20:33.761 "trsvcid": "$NVMF_PORT", 00:20:33.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.761 "hdgst": ${hdgst:-false}, 00:20:33.761 "ddgst": ${ddgst:-false} 00:20:33.761 }, 00:20:33.761 "method": "bdev_nvme_attach_controller" 00:20:33.761 } 00:20:33.761 EOF 00:20:33.761 )") 00:20:33.761 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:33.761 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:33.761 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:33.761 { 00:20:33.761 "params": { 00:20:33.761 "name": "Nvme$subsystem", 00:20:33.761 "trtype": "$TEST_TRANSPORT", 00:20:33.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.761 "adrfam": "ipv4", 00:20:33.761 "trsvcid": "$NVMF_PORT", 00:20:33.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.761 "hdgst": ${hdgst:-false}, 00:20:33.761 "ddgst": ${ddgst:-false} 00:20:33.761 }, 00:20:33.761 "method": "bdev_nvme_attach_controller" 
00:20:33.761 } 00:20:33.761 EOF 00:20:33.761 )") 00:20:33.761 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:33.761 [2024-11-20 18:57:55.924103] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:20:33.761 [2024-11-20 18:57:55.924154] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:33.761 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:33.761 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:33.761 { 00:20:33.761 "params": { 00:20:33.762 "name": "Nvme$subsystem", 00:20:33.762 "trtype": "$TEST_TRANSPORT", 00:20:33.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.762 "adrfam": "ipv4", 00:20:33.762 "trsvcid": "$NVMF_PORT", 00:20:33.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.762 "hdgst": ${hdgst:-false}, 00:20:33.762 "ddgst": ${ddgst:-false} 00:20:33.762 }, 00:20:33.762 "method": "bdev_nvme_attach_controller" 00:20:33.762 } 00:20:33.762 EOF 00:20:33.762 )") 00:20:33.762 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:33.762 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:33.762 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:33.762 { 00:20:33.762 "params": { 00:20:33.762 "name": "Nvme$subsystem", 00:20:33.762 "trtype": "$TEST_TRANSPORT", 00:20:33.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.762 "adrfam": "ipv4", 00:20:33.762 "trsvcid": "$NVMF_PORT", 
00:20:33.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.762 "hdgst": ${hdgst:-false}, 00:20:33.762 "ddgst": ${ddgst:-false} 00:20:33.762 }, 00:20:33.762 "method": "bdev_nvme_attach_controller" 00:20:33.762 } 00:20:33.762 EOF 00:20:33.762 )") 00:20:33.762 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:33.762 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:33.762 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:33.762 { 00:20:33.762 "params": { 00:20:33.762 "name": "Nvme$subsystem", 00:20:33.762 "trtype": "$TEST_TRANSPORT", 00:20:33.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.762 "adrfam": "ipv4", 00:20:33.762 "trsvcid": "$NVMF_PORT", 00:20:33.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.762 "hdgst": ${hdgst:-false}, 00:20:33.762 "ddgst": ${ddgst:-false} 00:20:33.762 }, 00:20:33.762 "method": "bdev_nvme_attach_controller" 00:20:33.762 } 00:20:33.762 EOF 00:20:33.762 )") 00:20:33.762 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:33.762 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:33.762 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:33.762 { 00:20:33.762 "params": { 00:20:33.762 "name": "Nvme$subsystem", 00:20:33.762 "trtype": "$TEST_TRANSPORT", 00:20:33.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.762 "adrfam": "ipv4", 00:20:33.762 "trsvcid": "$NVMF_PORT", 00:20:33.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:20:33.762 "hdgst": ${hdgst:-false}, 00:20:33.762 "ddgst": ${ddgst:-false} 00:20:33.762 }, 00:20:33.762 "method": "bdev_nvme_attach_controller" 00:20:33.762 } 00:20:33.762 EOF 00:20:33.762 )") 00:20:33.762 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:33.762 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:20:33.762 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:33.762 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:33.762 "params": { 00:20:33.762 "name": "Nvme1", 00:20:33.762 "trtype": "tcp", 00:20:33.762 "traddr": "10.0.0.2", 00:20:33.762 "adrfam": "ipv4", 00:20:33.762 "trsvcid": "4420", 00:20:33.762 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:33.762 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:33.762 "hdgst": false, 00:20:33.762 "ddgst": false 00:20:33.762 }, 00:20:33.762 "method": "bdev_nvme_attach_controller" 00:20:33.762 },{ 00:20:33.762 "params": { 00:20:33.762 "name": "Nvme2", 00:20:33.762 "trtype": "tcp", 00:20:33.762 "traddr": "10.0.0.2", 00:20:33.762 "adrfam": "ipv4", 00:20:33.762 "trsvcid": "4420", 00:20:33.762 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:33.762 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:33.762 "hdgst": false, 00:20:33.762 "ddgst": false 00:20:33.762 }, 00:20:33.762 "method": "bdev_nvme_attach_controller" 00:20:33.762 },{ 00:20:33.762 "params": { 00:20:33.762 "name": "Nvme3", 00:20:33.762 "trtype": "tcp", 00:20:33.762 "traddr": "10.0.0.2", 00:20:33.762 "adrfam": "ipv4", 00:20:33.762 "trsvcid": "4420", 00:20:33.762 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:33.762 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:33.762 "hdgst": false, 00:20:33.762 "ddgst": false 00:20:33.762 }, 00:20:33.762 "method": "bdev_nvme_attach_controller" 00:20:33.762 },{ 00:20:33.762 "params": { 00:20:33.762 
"name": "Nvme4", 00:20:33.762 "trtype": "tcp", 00:20:33.762 "traddr": "10.0.0.2", 00:20:33.762 "adrfam": "ipv4", 00:20:33.762 "trsvcid": "4420", 00:20:33.762 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:33.762 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:33.762 "hdgst": false, 00:20:33.762 "ddgst": false 00:20:33.762 }, 00:20:33.762 "method": "bdev_nvme_attach_controller" 00:20:33.762 },{ 00:20:33.762 "params": { 00:20:33.762 "name": "Nvme5", 00:20:33.762 "trtype": "tcp", 00:20:33.762 "traddr": "10.0.0.2", 00:20:33.762 "adrfam": "ipv4", 00:20:33.762 "trsvcid": "4420", 00:20:33.762 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:33.762 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:33.762 "hdgst": false, 00:20:33.762 "ddgst": false 00:20:33.762 }, 00:20:33.762 "method": "bdev_nvme_attach_controller" 00:20:33.762 },{ 00:20:33.762 "params": { 00:20:33.762 "name": "Nvme6", 00:20:33.762 "trtype": "tcp", 00:20:33.762 "traddr": "10.0.0.2", 00:20:33.762 "adrfam": "ipv4", 00:20:33.762 "trsvcid": "4420", 00:20:33.762 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:33.762 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:33.762 "hdgst": false, 00:20:33.762 "ddgst": false 00:20:33.762 }, 00:20:33.762 "method": "bdev_nvme_attach_controller" 00:20:33.762 },{ 00:20:33.762 "params": { 00:20:33.762 "name": "Nvme7", 00:20:33.762 "trtype": "tcp", 00:20:33.762 "traddr": "10.0.0.2", 00:20:33.762 "adrfam": "ipv4", 00:20:33.762 "trsvcid": "4420", 00:20:33.762 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:33.762 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:33.762 "hdgst": false, 00:20:33.762 "ddgst": false 00:20:33.762 }, 00:20:33.762 "method": "bdev_nvme_attach_controller" 00:20:33.762 },{ 00:20:33.762 "params": { 00:20:33.762 "name": "Nvme8", 00:20:33.762 "trtype": "tcp", 00:20:33.762 "traddr": "10.0.0.2", 00:20:33.762 "adrfam": "ipv4", 00:20:33.762 "trsvcid": "4420", 00:20:33.762 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:33.762 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:33.762 
"hdgst": false, 00:20:33.762 "ddgst": false 00:20:33.762 }, 00:20:33.762 "method": "bdev_nvme_attach_controller" 00:20:33.762 },{ 00:20:33.762 "params": { 00:20:33.762 "name": "Nvme9", 00:20:33.762 "trtype": "tcp", 00:20:33.762 "traddr": "10.0.0.2", 00:20:33.762 "adrfam": "ipv4", 00:20:33.762 "trsvcid": "4420", 00:20:33.762 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:33.762 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:33.762 "hdgst": false, 00:20:33.762 "ddgst": false 00:20:33.762 }, 00:20:33.762 "method": "bdev_nvme_attach_controller" 00:20:33.762 },{ 00:20:33.762 "params": { 00:20:33.762 "name": "Nvme10", 00:20:33.762 "trtype": "tcp", 00:20:33.762 "traddr": "10.0.0.2", 00:20:33.762 "adrfam": "ipv4", 00:20:33.762 "trsvcid": "4420", 00:20:33.762 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:33.762 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:33.762 "hdgst": false, 00:20:33.762 "ddgst": false 00:20:33.762 }, 00:20:33.762 "method": "bdev_nvme_attach_controller" 00:20:33.762 }' 00:20:33.762 [2024-11-20 18:57:55.998162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.762 [2024-11-20 18:57:56.039541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.666 18:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:35.666 18:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:35.666 18:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:35.666 18:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.666 18:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:35.666 18:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
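The long run of `config+=("$(cat <<-EOF ...)")` fragments in the trace is `gen_nvmf_target_json` (`nvmf/common.sh`) executing once per subsystem: each iteration expands the shell variables inside a heredoc and appends the resulting JSON object to an array. A self-contained sketch of that accumulation step, with the SPDK variables (`$TEST_TRANSPORT`, `$NVMF_FIRST_TARGET_IP`, `$NVMF_PORT`) replaced by the fixed values this run resolved them to:

```shell
# Sketch of the per-subsystem accumulation traced above: one JSON object per
# id, built from a heredoc so shell variables expand inside it. Illustrative
# stand-in for gen_nvmf_target_json, not the actual helper.
gen_target_json_params() {
  local subsystem config=()
  for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
  done
  printf '%s\n' "${config[@]}"
}

gen_target_json_params 1 2
```

Building each fragment in a subshell keeps the heredoc expansion local while the array carries the accumulated objects back to the caller.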
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.666 18:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3699803 00:20:35.666 18:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:20:35.666 18:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:20:36.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3699803 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:36.603 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3699522 00:20:36.603 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:36.603 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:36.603 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:36.603 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:36.603 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:36.603 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:36.603 { 00:20:36.603 "params": { 00:20:36.603 "name": "Nvme$subsystem", 00:20:36.603 "trtype": "$TEST_TRANSPORT", 00:20:36.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.603 "adrfam": "ipv4", 00:20:36.603 "trsvcid": "$NVMF_PORT", 00:20:36.603 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.603 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.603 "hdgst": ${hdgst:-false}, 00:20:36.603 "ddgst": ${ddgst:-false} 00:20:36.603 }, 00:20:36.603 "method": "bdev_nvme_attach_controller" 00:20:36.603 } 00:20:36.603 EOF 00:20:36.603 )") 00:20:36.603 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:36.603 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:36.603 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:36.603 { 00:20:36.603 "params": { 00:20:36.603 "name": "Nvme$subsystem", 00:20:36.603 "trtype": "$TEST_TRANSPORT", 00:20:36.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.603 "adrfam": "ipv4", 00:20:36.603 "trsvcid": "$NVMF_PORT", 00:20:36.603 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.603 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.603 "hdgst": ${hdgst:-false}, 00:20:36.603 "ddgst": ${ddgst:-false} 00:20:36.603 }, 00:20:36.603 "method": "bdev_nvme_attach_controller" 00:20:36.603 } 00:20:36.603 EOF 00:20:36.603 )") 00:20:36.603 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:36.603 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:36.603 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:36.603 { 00:20:36.603 "params": { 00:20:36.603 "name": "Nvme$subsystem", 00:20:36.603 "trtype": "$TEST_TRANSPORT", 00:20:36.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.603 "adrfam": "ipv4", 00:20:36.603 "trsvcid": "$NVMF_PORT", 00:20:36.603 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.603 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.603 "hdgst": 
${hdgst:-false}, 00:20:36.603 "ddgst": ${ddgst:-false} 00:20:36.603 }, 00:20:36.603 "method": "bdev_nvme_attach_controller" 00:20:36.603 } 00:20:36.603 EOF 00:20:36.603 )") 00:20:36.603 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:36.603 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:36.603 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:36.603 { 00:20:36.603 "params": { 00:20:36.603 "name": "Nvme$subsystem", 00:20:36.603 "trtype": "$TEST_TRANSPORT", 00:20:36.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.603 "adrfam": "ipv4", 00:20:36.603 "trsvcid": "$NVMF_PORT", 00:20:36.603 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.603 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.603 "hdgst": ${hdgst:-false}, 00:20:36.603 "ddgst": ${ddgst:-false} 00:20:36.603 }, 00:20:36.603 "method": "bdev_nvme_attach_controller" 00:20:36.603 } 00:20:36.603 EOF 00:20:36.603 )") 00:20:36.603 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:36.603 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:36.603 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:36.603 { 00:20:36.603 "params": { 00:20:36.603 "name": "Nvme$subsystem", 00:20:36.603 "trtype": "$TEST_TRANSPORT", 00:20:36.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.603 "adrfam": "ipv4", 00:20:36.603 "trsvcid": "$NVMF_PORT", 00:20:36.603 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.603 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.603 "hdgst": ${hdgst:-false}, 00:20:36.603 "ddgst": ${ddgst:-false} 00:20:36.603 }, 00:20:36.603 "method": "bdev_nvme_attach_controller" 
00:20:36.603 } 00:20:36.603 EOF 00:20:36.603 )") 00:20:36.603 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:36.603 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:36.603 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:36.603 { 00:20:36.603 "params": { 00:20:36.603 "name": "Nvme$subsystem", 00:20:36.603 "trtype": "$TEST_TRANSPORT", 00:20:36.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.603 "adrfam": "ipv4", 00:20:36.603 "trsvcid": "$NVMF_PORT", 00:20:36.603 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.603 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.603 "hdgst": ${hdgst:-false}, 00:20:36.603 "ddgst": ${ddgst:-false} 00:20:36.603 }, 00:20:36.603 "method": "bdev_nvme_attach_controller" 00:20:36.603 } 00:20:36.603 EOF 00:20:36.603 )") 00:20:36.603 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:36.603 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:36.603 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:36.603 { 00:20:36.603 "params": { 00:20:36.603 "name": "Nvme$subsystem", 00:20:36.603 "trtype": "$TEST_TRANSPORT", 00:20:36.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.604 "adrfam": "ipv4", 00:20:36.604 "trsvcid": "$NVMF_PORT", 00:20:36.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.604 "hdgst": ${hdgst:-false}, 00:20:36.604 "ddgst": ${ddgst:-false} 00:20:36.604 }, 00:20:36.604 "method": "bdev_nvme_attach_controller" 00:20:36.604 } 00:20:36.604 EOF 00:20:36.604 )") 00:20:36.604 [2024-11-20 18:57:58.849590] Starting SPDK v25.01-pre git sha1 
bd9804982 / DPDK 24.03.0 initialization... 00:20:36.604 [2024-11-20 18:57:58.849641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3700289 ] 00:20:36.604 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:36.604 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:36.604 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:36.604 { 00:20:36.604 "params": { 00:20:36.604 "name": "Nvme$subsystem", 00:20:36.604 "trtype": "$TEST_TRANSPORT", 00:20:36.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.604 "adrfam": "ipv4", 00:20:36.604 "trsvcid": "$NVMF_PORT", 00:20:36.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.604 "hdgst": ${hdgst:-false}, 00:20:36.604 "ddgst": ${ddgst:-false} 00:20:36.604 }, 00:20:36.604 "method": "bdev_nvme_attach_controller" 00:20:36.604 } 00:20:36.604 EOF 00:20:36.604 )") 00:20:36.604 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:36.604 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:36.604 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:36.604 { 00:20:36.604 "params": { 00:20:36.604 "name": "Nvme$subsystem", 00:20:36.604 "trtype": "$TEST_TRANSPORT", 00:20:36.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.604 "adrfam": "ipv4", 00:20:36.604 "trsvcid": "$NVMF_PORT", 00:20:36.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.604 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:20:36.604 "hdgst": ${hdgst:-false}, 00:20:36.604 "ddgst": ${ddgst:-false} 00:20:36.604 }, 00:20:36.604 "method": "bdev_nvme_attach_controller" 00:20:36.604 } 00:20:36.604 EOF 00:20:36.604 )") 00:20:36.604 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:36.604 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:36.604 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:36.604 { 00:20:36.604 "params": { 00:20:36.604 "name": "Nvme$subsystem", 00:20:36.604 "trtype": "$TEST_TRANSPORT", 00:20:36.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.604 "adrfam": "ipv4", 00:20:36.604 "trsvcid": "$NVMF_PORT", 00:20:36.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.604 "hdgst": ${hdgst:-false}, 00:20:36.604 "ddgst": ${ddgst:-false} 00:20:36.604 }, 00:20:36.604 "method": "bdev_nvme_attach_controller" 00:20:36.604 } 00:20:36.604 EOF 00:20:36.604 )") 00:20:36.604 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:36.604 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:20:36.604 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:36.604 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:36.604 "params": { 00:20:36.604 "name": "Nvme1", 00:20:36.604 "trtype": "tcp", 00:20:36.604 "traddr": "10.0.0.2", 00:20:36.604 "adrfam": "ipv4", 00:20:36.604 "trsvcid": "4420", 00:20:36.604 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.604 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:36.604 "hdgst": false, 00:20:36.604 "ddgst": false 00:20:36.604 }, 00:20:36.604 "method": "bdev_nvme_attach_controller" 00:20:36.604 },{ 00:20:36.604 "params": { 00:20:36.604 "name": "Nvme2", 00:20:36.604 "trtype": "tcp", 00:20:36.604 "traddr": "10.0.0.2", 00:20:36.604 "adrfam": "ipv4", 00:20:36.604 "trsvcid": "4420", 00:20:36.604 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:36.604 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:36.604 "hdgst": false, 00:20:36.604 "ddgst": false 00:20:36.604 }, 00:20:36.604 "method": "bdev_nvme_attach_controller" 00:20:36.604 },{ 00:20:36.604 "params": { 00:20:36.604 "name": "Nvme3", 00:20:36.604 "trtype": "tcp", 00:20:36.604 "traddr": "10.0.0.2", 00:20:36.604 "adrfam": "ipv4", 00:20:36.604 "trsvcid": "4420", 00:20:36.604 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:36.604 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:36.604 "hdgst": false, 00:20:36.604 "ddgst": false 00:20:36.604 }, 00:20:36.604 "method": "bdev_nvme_attach_controller" 00:20:36.604 },{ 00:20:36.604 "params": { 00:20:36.604 "name": "Nvme4", 00:20:36.604 "trtype": "tcp", 00:20:36.604 "traddr": "10.0.0.2", 00:20:36.604 "adrfam": "ipv4", 00:20:36.604 "trsvcid": "4420", 00:20:36.604 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:36.604 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:36.604 "hdgst": false, 00:20:36.604 "ddgst": false 00:20:36.604 }, 00:20:36.604 "method": "bdev_nvme_attach_controller" 00:20:36.604 },{ 00:20:36.604 "params": { 
00:20:36.604 "name": "Nvme5", 00:20:36.604 "trtype": "tcp", 00:20:36.604 "traddr": "10.0.0.2", 00:20:36.604 "adrfam": "ipv4", 00:20:36.604 "trsvcid": "4420", 00:20:36.604 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:36.604 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:36.604 "hdgst": false, 00:20:36.604 "ddgst": false 00:20:36.604 }, 00:20:36.604 "method": "bdev_nvme_attach_controller" 00:20:36.604 },{ 00:20:36.604 "params": { 00:20:36.604 "name": "Nvme6", 00:20:36.604 "trtype": "tcp", 00:20:36.604 "traddr": "10.0.0.2", 00:20:36.604 "adrfam": "ipv4", 00:20:36.604 "trsvcid": "4420", 00:20:36.604 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:36.604 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:36.604 "hdgst": false, 00:20:36.604 "ddgst": false 00:20:36.604 }, 00:20:36.604 "method": "bdev_nvme_attach_controller" 00:20:36.604 },{ 00:20:36.604 "params": { 00:20:36.604 "name": "Nvme7", 00:20:36.604 "trtype": "tcp", 00:20:36.604 "traddr": "10.0.0.2", 00:20:36.604 "adrfam": "ipv4", 00:20:36.604 "trsvcid": "4420", 00:20:36.604 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:36.604 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:36.604 "hdgst": false, 00:20:36.604 "ddgst": false 00:20:36.604 }, 00:20:36.604 "method": "bdev_nvme_attach_controller" 00:20:36.604 },{ 00:20:36.604 "params": { 00:20:36.604 "name": "Nvme8", 00:20:36.604 "trtype": "tcp", 00:20:36.604 "traddr": "10.0.0.2", 00:20:36.604 "adrfam": "ipv4", 00:20:36.604 "trsvcid": "4420", 00:20:36.604 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:36.604 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:36.604 "hdgst": false, 00:20:36.604 "ddgst": false 00:20:36.604 }, 00:20:36.604 "method": "bdev_nvme_attach_controller" 00:20:36.604 },{ 00:20:36.604 "params": { 00:20:36.604 "name": "Nvme9", 00:20:36.604 "trtype": "tcp", 00:20:36.604 "traddr": "10.0.0.2", 00:20:36.604 "adrfam": "ipv4", 00:20:36.604 "trsvcid": "4420", 00:20:36.604 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:36.604 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:20:36.604 "hdgst": false, 00:20:36.604 "ddgst": false 00:20:36.604 }, 00:20:36.604 "method": "bdev_nvme_attach_controller" 00:20:36.604 },{ 00:20:36.604 "params": { 00:20:36.604 "name": "Nvme10", 00:20:36.604 "trtype": "tcp", 00:20:36.604 "traddr": "10.0.0.2", 00:20:36.604 "adrfam": "ipv4", 00:20:36.604 "trsvcid": "4420", 00:20:36.604 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:36.604 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:36.604 "hdgst": false, 00:20:36.604 "ddgst": false 00:20:36.604 }, 00:20:36.604 "method": "bdev_nvme_attach_controller" 00:20:36.604 }' 00:20:36.604 [2024-11-20 18:57:58.928114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.864 [2024-11-20 18:57:58.969163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.240 Running I/O for 1 seconds... 00:20:39.187 2244.00 IOPS, 140.25 MiB/s 00:20:39.187 Latency(us) 00:20:39.187 [2024-11-20T17:58:01.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.187 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:39.187 Verification LBA range: start 0x0 length 0x400 00:20:39.187 Nvme1n1 : 1.02 250.75 15.67 0.00 0.00 252783.66 16602.45 241671.80 00:20:39.187 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:39.187 Verification LBA range: start 0x0 length 0x400 00:20:39.187 Nvme2n1 : 1.05 243.64 15.23 0.00 0.00 256340.60 16852.11 215707.06 00:20:39.187 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:39.187 Verification LBA range: start 0x0 length 0x400 00:20:39.187 Nvme3n1 : 1.10 293.74 18.36 0.00 0.00 209321.64 3838.54 213709.78 00:20:39.187 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:39.187 Verification LBA range: start 0x0 length 0x400 00:20:39.187 Nvme4n1 : 1.10 291.62 18.23 0.00 0.00 208148.97 21970.16 212711.13 00:20:39.187 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:20:39.187 Verification LBA range: start 0x0 length 0x400 00:20:39.187 Nvme5n1 : 1.14 280.99 17.56 0.00 0.00 213403.75 15978.30 218702.99 00:20:39.187 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:39.187 Verification LBA range: start 0x0 length 0x400 00:20:39.187 Nvme6n1 : 1.13 283.48 17.72 0.00 0.00 208368.20 17975.59 225693.50 00:20:39.187 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:39.187 Verification LBA range: start 0x0 length 0x400 00:20:39.187 Nvme7n1 : 1.13 282.61 17.66 0.00 0.00 205878.32 13044.78 215707.06 00:20:39.187 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:39.187 Verification LBA range: start 0x0 length 0x400 00:20:39.187 Nvme8n1 : 1.15 333.63 20.85 0.00 0.00 172150.41 11421.99 213709.78 00:20:39.187 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:39.187 Verification LBA range: start 0x0 length 0x400 00:20:39.187 Nvme9n1 : 1.14 280.09 17.51 0.00 0.00 201773.25 17850.76 222697.57 00:20:39.187 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:39.187 Verification LBA range: start 0x0 length 0x400 00:20:39.187 Nvme10n1 : 1.15 279.17 17.45 0.00 0.00 199532.93 14667.58 229688.08 00:20:39.187 [2024-11-20T17:58:01.512Z] =================================================================================================================== 00:20:39.187 [2024-11-20T17:58:01.512Z] Total : 2819.71 176.23 0.00 0.00 210234.24 3838.54 241671.80 00:20:39.447 18:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:20:39.447 18:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:39.447 18:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:20:39.447 18:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:39.447 18:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:39.447 18:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:39.447 18:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:20:39.447 18:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:39.447 18:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:20:39.447 18:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:39.447 18:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:39.447 rmmod nvme_tcp 00:20:39.447 rmmod nvme_fabrics 00:20:39.447 rmmod nvme_keyring 00:20:39.447 18:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:39.447 18:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:20:39.447 18:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:20:39.447 18:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3699522 ']' 00:20:39.447 18:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3699522 00:20:39.447 18:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3699522 ']' 00:20:39.447 18:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 3699522 00:20:39.447 18:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:20:39.447 18:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.447 18:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3699522 00:20:39.447 18:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:39.447 18:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:39.447 18:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3699522' 00:20:39.447 killing process with pid 3699522 00:20:39.447 18:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3699522 00:20:39.447 18:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3699522 00:20:40.014 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:40.014 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:40.014 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:40.014 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:20:40.014 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:20:40.014 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:40.014 18:58:02 
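The `killprocess` sequence logged above checks liveness with `kill -0`, resolves the command name with `ps --no-headers -o comm=`, and refuses to kill a `sudo` wrapper before announcing the kill. A simplified sketch of that guard pattern (the function name and return codes are illustrative, not the exact autotest_common.sh implementation):

```shell
#!/usr/bin/env bash
# Simplified sketch of the killprocess guard: confirm the pid is alive
# and is not a sudo wrapper before killing and announcing it.
killprocess_sketch() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 1        # process must still exist
  local name
  name=$(ps --no-headers -o comm= "$pid")       # resolve the command name
  if [ "$name" = "sudo" ]; then                 # never kill the sudo wrapper itself
    return 1
  fi
  echo "killing process with pid $pid"
  kill "$pid"
}
```

The log's `process_name=reactor_1` line is this same check resolving the nvmf target's reactor thread name before the kill proceeds.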
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:20:40.014 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:40.014 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:40.014 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.014 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:40.014 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.919 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:41.919 00:20:41.919 real 0m15.789s 00:20:41.919 user 0m35.904s 00:20:41.919 sys 0m5.902s 00:20:41.919 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:41.919 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:41.919 ************************************ 00:20:41.919 END TEST nvmf_shutdown_tc1 00:20:41.919 ************************************ 00:20:41.919 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:41.919 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:41.919 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:41.919 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:41.919 ************************************ 00:20:41.919 
START TEST nvmf_shutdown_tc2 00:20:41.919 ************************************ 00:20:41.919 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:20:41.919 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:20:41.919 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:41.919 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:42.179 18:58:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:42.179 18:58:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:42.179 18:58:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:42.179 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:42.179 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:42.179 18:58:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:42.179 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.180 18:58:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:42.180 Found net devices under 0000:86:00.0: cvl_0_0 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:42.180 Found net devices under 0000:86:00.1: cvl_0_1 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:42.180 18:58:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:42.180 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:42.439 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:42.440 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:42.440 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:42.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:42.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.485 ms 00:20:42.440 00:20:42.440 --- 10.0.0.2 ping statistics --- 00:20:42.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.440 rtt min/avg/max/mdev = 0.485/0.485/0.485/0.000 ms 00:20:42.440 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:42.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:42.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:20:42.440 00:20:42.440 --- 10.0.0.1 ping statistics --- 00:20:42.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.440 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:20:42.440 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:42.440 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:20:42.440 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:42.440 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:42.440 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:42.440 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:42.440 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:42.440 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:42.440 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:42.440 18:58:04 
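The `nvmf_tcp_init` steps traced above (flush both interfaces, move the target NIC into a fresh namespace, address both sides, open TCP/4420, then ping each direction) can be sketched as a standalone script. This is a reconstruction from the log, not the harness's own `nvmf/common.sh`; interface names and the `DRY_RUN` echo wrapper are assumptions added so the sketch can be inspected without root.

```shell
#!/bin/sh
# Sketch of the namespace topology built in the log above.
# DRY_RUN defaults to 1 (print commands only); set DRY_RUN=0 and
# run as root to actually apply the configuration.
DRY_RUN=${DRY_RUN:-1}
TARGET_IF=${TARGET_IF:-cvl_0_0}        # NIC handed to the SPDK target
INITIATOR_IF=${INITIATOR_IF:-cvl_0_1}  # NIC left in the root namespace
NS=${NS:-cvl_0_0_ns_spdk}

run() {
    if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi
}

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"             # target NIC moves into the namespace
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"      # initiator side stays outside
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                               # sanity check: root ns -> namespace
run ip netns exec "$NS" ping -c 1 10.0.0.1           # and the reverse direction
```

Moving only the target NIC into the namespace is what forces initiator/target traffic through the real wire between the two ports instead of the kernel loopback path.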
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:42.440 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:42.440 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:42.440 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:42.440 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3701314 00:20:42.440 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3701314 00:20:42.440 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:42.440 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3701314 ']' 00:20:42.440 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.440 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:42.440 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:42.440 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:42.440 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:42.440 [2024-11-20 18:58:04.631478] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:20:42.440 [2024-11-20 18:58:04.631525] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.440 [2024-11-20 18:58:04.712468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:42.440 [2024-11-20 18:58:04.754383] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.440 [2024-11-20 18:58:04.754421] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.440 [2024-11-20 18:58:04.754428] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.440 [2024-11-20 18:58:04.754434] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.440 [2024-11-20 18:58:04.754439] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:42.440 [2024-11-20 18:58:04.756008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:42.440 [2024-11-20 18:58:04.756117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:42.440 [2024-11-20 18:58:04.756242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:42.440 [2024-11-20 18:58:04.756243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:43.377 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.377 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:43.377 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:43.377 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:43.377 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:43.377 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.377 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:43.377 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.377 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:43.377 [2024-11-20 18:58:05.524333] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.377 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.377 18:58:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:43.377 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:43.377 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:43.377 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:43.377 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:43.377 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.377 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:43.377 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.377 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:43.377 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.377 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:43.377 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.377 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:43.377 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.377 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:20:43.377 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.377 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:43.377 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.377 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:43.377 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.377 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:43.377 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.377 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:43.378 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.378 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:43.378 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:43.378 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.378 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:43.378 Malloc1 00:20:43.378 [2024-11-20 18:58:05.628817] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:43.378 Malloc2 00:20:43.378 Malloc3 00:20:43.637 Malloc4 00:20:43.637 Malloc5 00:20:43.637 Malloc6 00:20:43.637 Malloc7 00:20:43.637 Malloc8 00:20:43.637 Malloc9 
00:20:43.897 Malloc10 00:20:43.897 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.897 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:43.897 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:43.897 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:43.897 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3701595 00:20:43.897 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3701595 /var/tmp/bdevperf.sock 00:20:43.897 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3701595 ']' 00:20:43.897 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:43.897 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:43.897 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:43.897 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:43.897 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:43.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:43.897 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:20:43.897 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:43.897 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:20:43.897 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:43.897 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:43.897 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:43.897 { 00:20:43.897 "params": { 00:20:43.897 "name": "Nvme$subsystem", 00:20:43.897 "trtype": "$TEST_TRANSPORT", 00:20:43.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.897 "adrfam": "ipv4", 00:20:43.897 "trsvcid": "$NVMF_PORT", 00:20:43.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.897 "hdgst": ${hdgst:-false}, 00:20:43.897 "ddgst": ${ddgst:-false} 00:20:43.897 }, 00:20:43.897 "method": "bdev_nvme_attach_controller" 00:20:43.897 } 00:20:43.897 EOF 00:20:43.897 )") 00:20:43.897 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:43.897 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:43.897 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:43.897 { 00:20:43.897 "params": { 00:20:43.897 "name": "Nvme$subsystem", 00:20:43.897 "trtype": "$TEST_TRANSPORT", 00:20:43.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.897 
"adrfam": "ipv4", 00:20:43.897 "trsvcid": "$NVMF_PORT", 00:20:43.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.897 "hdgst": ${hdgst:-false}, 00:20:43.897 "ddgst": ${ddgst:-false} 00:20:43.897 }, 00:20:43.897 "method": "bdev_nvme_attach_controller" 00:20:43.897 } 00:20:43.897 EOF 00:20:43.897 )") 00:20:43.897 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:43.897 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:43.897 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:43.897 { 00:20:43.897 "params": { 00:20:43.897 "name": "Nvme$subsystem", 00:20:43.897 "trtype": "$TEST_TRANSPORT", 00:20:43.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.897 "adrfam": "ipv4", 00:20:43.897 "trsvcid": "$NVMF_PORT", 00:20:43.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.897 "hdgst": ${hdgst:-false}, 00:20:43.897 "ddgst": ${ddgst:-false} 00:20:43.897 }, 00:20:43.897 "method": "bdev_nvme_attach_controller" 00:20:43.897 } 00:20:43.897 EOF 00:20:43.897 )") 00:20:43.897 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:43.897 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:43.897 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:43.897 { 00:20:43.897 "params": { 00:20:43.897 "name": "Nvme$subsystem", 00:20:43.897 "trtype": "$TEST_TRANSPORT", 00:20:43.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.897 "adrfam": "ipv4", 00:20:43.897 "trsvcid": "$NVMF_PORT", 00:20:43.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:20:43.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.897 "hdgst": ${hdgst:-false}, 00:20:43.897 "ddgst": ${ddgst:-false} 00:20:43.897 }, 00:20:43.897 "method": "bdev_nvme_attach_controller" 00:20:43.897 } 00:20:43.897 EOF 00:20:43.897 )") 00:20:43.897 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:43.897 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:43.897 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:43.897 { 00:20:43.897 "params": { 00:20:43.897 "name": "Nvme$subsystem", 00:20:43.897 "trtype": "$TEST_TRANSPORT", 00:20:43.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.897 "adrfam": "ipv4", 00:20:43.897 "trsvcid": "$NVMF_PORT", 00:20:43.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.897 "hdgst": ${hdgst:-false}, 00:20:43.897 "ddgst": ${ddgst:-false} 00:20:43.897 }, 00:20:43.897 "method": "bdev_nvme_attach_controller" 00:20:43.897 } 00:20:43.897 EOF 00:20:43.897 )") 00:20:43.897 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:43.897 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:43.897 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:43.897 { 00:20:43.897 "params": { 00:20:43.897 "name": "Nvme$subsystem", 00:20:43.897 "trtype": "$TEST_TRANSPORT", 00:20:43.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.897 "adrfam": "ipv4", 00:20:43.897 "trsvcid": "$NVMF_PORT", 00:20:43.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.897 "hdgst": ${hdgst:-false}, 00:20:43.897 "ddgst": 
${ddgst:-false} 00:20:43.897 }, 00:20:43.897 "method": "bdev_nvme_attach_controller" 00:20:43.897 } 00:20:43.897 EOF 00:20:43.897 )") 00:20:43.897 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:43.897 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:43.897 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:43.897 { 00:20:43.897 "params": { 00:20:43.898 "name": "Nvme$subsystem", 00:20:43.898 "trtype": "$TEST_TRANSPORT", 00:20:43.898 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.898 "adrfam": "ipv4", 00:20:43.898 "trsvcid": "$NVMF_PORT", 00:20:43.898 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.898 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.898 "hdgst": ${hdgst:-false}, 00:20:43.898 "ddgst": ${ddgst:-false} 00:20:43.898 }, 00:20:43.898 "method": "bdev_nvme_attach_controller" 00:20:43.898 } 00:20:43.898 EOF 00:20:43.898 )") 00:20:43.898 [2024-11-20 18:58:06.097717] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:20:43.898 [2024-11-20 18:58:06.097768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3701595 ] 00:20:43.898 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:43.898 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:43.898 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:43.898 { 00:20:43.898 "params": { 00:20:43.898 "name": "Nvme$subsystem", 00:20:43.898 "trtype": "$TEST_TRANSPORT", 00:20:43.898 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.898 "adrfam": "ipv4", 00:20:43.898 "trsvcid": "$NVMF_PORT", 00:20:43.898 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.898 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.898 "hdgst": ${hdgst:-false}, 00:20:43.898 "ddgst": ${ddgst:-false} 00:20:43.898 }, 00:20:43.898 "method": "bdev_nvme_attach_controller" 00:20:43.898 } 00:20:43.898 EOF 00:20:43.898 )") 00:20:43.898 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:43.898 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:43.898 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:43.898 { 00:20:43.898 "params": { 00:20:43.898 "name": "Nvme$subsystem", 00:20:43.898 "trtype": "$TEST_TRANSPORT", 00:20:43.898 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.898 "adrfam": "ipv4", 00:20:43.898 "trsvcid": "$NVMF_PORT", 00:20:43.898 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.898 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.898 "hdgst": 
${hdgst:-false}, 00:20:43.898 "ddgst": ${ddgst:-false} 00:20:43.898 }, 00:20:43.898 "method": "bdev_nvme_attach_controller" 00:20:43.898 } 00:20:43.898 EOF 00:20:43.898 )") 00:20:43.898 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:43.898 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:43.898 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:43.898 { 00:20:43.898 "params": { 00:20:43.898 "name": "Nvme$subsystem", 00:20:43.898 "trtype": "$TEST_TRANSPORT", 00:20:43.898 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.898 "adrfam": "ipv4", 00:20:43.898 "trsvcid": "$NVMF_PORT", 00:20:43.898 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.898 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.898 "hdgst": ${hdgst:-false}, 00:20:43.898 "ddgst": ${ddgst:-false} 00:20:43.898 }, 00:20:43.898 "method": "bdev_nvme_attach_controller" 00:20:43.898 } 00:20:43.898 EOF 00:20:43.898 )") 00:20:43.898 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:43.898 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:20:43.898 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:20:43.898 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:43.898 "params": { 00:20:43.898 "name": "Nvme1", 00:20:43.898 "trtype": "tcp", 00:20:43.898 "traddr": "10.0.0.2", 00:20:43.898 "adrfam": "ipv4", 00:20:43.898 "trsvcid": "4420", 00:20:43.898 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:43.898 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:43.898 "hdgst": false, 00:20:43.898 "ddgst": false 00:20:43.898 }, 00:20:43.898 "method": "bdev_nvme_attach_controller" 00:20:43.898 },{ 00:20:43.898 "params": { 00:20:43.898 "name": "Nvme2", 00:20:43.898 "trtype": "tcp", 00:20:43.898 "traddr": "10.0.0.2", 00:20:43.898 "adrfam": "ipv4", 00:20:43.898 "trsvcid": "4420", 00:20:43.898 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:43.898 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:43.898 "hdgst": false, 00:20:43.898 "ddgst": false 00:20:43.898 }, 00:20:43.898 "method": "bdev_nvme_attach_controller" 00:20:43.898 },{ 00:20:43.898 "params": { 00:20:43.898 "name": "Nvme3", 00:20:43.898 "trtype": "tcp", 00:20:43.898 "traddr": "10.0.0.2", 00:20:43.898 "adrfam": "ipv4", 00:20:43.898 "trsvcid": "4420", 00:20:43.898 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:43.898 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:43.898 "hdgst": false, 00:20:43.898 "ddgst": false 00:20:43.898 }, 00:20:43.898 "method": "bdev_nvme_attach_controller" 00:20:43.898 },{ 00:20:43.898 "params": { 00:20:43.898 "name": "Nvme4", 00:20:43.898 "trtype": "tcp", 00:20:43.898 "traddr": "10.0.0.2", 00:20:43.898 "adrfam": "ipv4", 00:20:43.898 "trsvcid": "4420", 00:20:43.898 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:43.898 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:43.898 "hdgst": false, 00:20:43.898 "ddgst": false 00:20:43.898 }, 00:20:43.898 "method": "bdev_nvme_attach_controller" 00:20:43.898 },{ 00:20:43.898 "params": { 
00:20:43.898 "name": "Nvme5", 00:20:43.898 "trtype": "tcp", 00:20:43.898 "traddr": "10.0.0.2", 00:20:43.898 "adrfam": "ipv4", 00:20:43.898 "trsvcid": "4420", 00:20:43.898 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:43.898 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:43.898 "hdgst": false, 00:20:43.898 "ddgst": false 00:20:43.898 }, 00:20:43.898 "method": "bdev_nvme_attach_controller" 00:20:43.898 },{ 00:20:43.898 "params": { 00:20:43.898 "name": "Nvme6", 00:20:43.898 "trtype": "tcp", 00:20:43.898 "traddr": "10.0.0.2", 00:20:43.898 "adrfam": "ipv4", 00:20:43.898 "trsvcid": "4420", 00:20:43.898 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:43.898 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:43.898 "hdgst": false, 00:20:43.898 "ddgst": false 00:20:43.898 }, 00:20:43.898 "method": "bdev_nvme_attach_controller" 00:20:43.898 },{ 00:20:43.898 "params": { 00:20:43.898 "name": "Nvme7", 00:20:43.898 "trtype": "tcp", 00:20:43.898 "traddr": "10.0.0.2", 00:20:43.898 "adrfam": "ipv4", 00:20:43.898 "trsvcid": "4420", 00:20:43.898 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:43.898 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:43.898 "hdgst": false, 00:20:43.898 "ddgst": false 00:20:43.898 }, 00:20:43.898 "method": "bdev_nvme_attach_controller" 00:20:43.898 },{ 00:20:43.898 "params": { 00:20:43.898 "name": "Nvme8", 00:20:43.898 "trtype": "tcp", 00:20:43.898 "traddr": "10.0.0.2", 00:20:43.898 "adrfam": "ipv4", 00:20:43.898 "trsvcid": "4420", 00:20:43.898 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:43.898 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:43.898 "hdgst": false, 00:20:43.898 "ddgst": false 00:20:43.898 }, 00:20:43.898 "method": "bdev_nvme_attach_controller" 00:20:43.898 },{ 00:20:43.898 "params": { 00:20:43.898 "name": "Nvme9", 00:20:43.898 "trtype": "tcp", 00:20:43.898 "traddr": "10.0.0.2", 00:20:43.898 "adrfam": "ipv4", 00:20:43.898 "trsvcid": "4420", 00:20:43.898 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:43.898 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:20:43.898 "hdgst": false, 00:20:43.898 "ddgst": false 00:20:43.898 }, 00:20:43.898 "method": "bdev_nvme_attach_controller" 00:20:43.898 },{ 00:20:43.898 "params": { 00:20:43.898 "name": "Nvme10", 00:20:43.898 "trtype": "tcp", 00:20:43.898 "traddr": "10.0.0.2", 00:20:43.898 "adrfam": "ipv4", 00:20:43.898 "trsvcid": "4420", 00:20:43.898 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:43.898 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:43.898 "hdgst": false, 00:20:43.898 "ddgst": false 00:20:43.898 }, 00:20:43.898 "method": "bdev_nvme_attach_controller" 00:20:43.898 }' 00:20:43.898 [2024-11-20 18:58:06.173274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.898 [2024-11-20 18:58:06.214074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.276 Running I/O for 10 seconds... 00:20:45.846 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:45.846 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:45.846 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:45.846 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.846 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:45.846 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.846 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:45.846 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:45.846 18:58:08 
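The `gen_nvmf_target_json 1 2 ... 10` call traced above builds one `bdev_nvme_attach_controller` stanza per subsystem index and joins them into the `--json` config fed to bdevperf over `/dev/fd/63`. A minimal sketch of that assembly, assuming plain `printf` in place of the harness's heredoc-plus-`jq` pipeline so it has no dependencies (the stanza shape mirrors the log output; the function name and argument order are this sketch's own):

```shell
#!/bin/sh
# gen_target_json TARGET_IP IDX [IDX...]
# Emits a comma-joined list of attach_controller objects, one per index,
# in the same shape as the printf output shown in the log.
gen_target_json() {
    ip=$1; shift
    first=1
    for i in "$@"; do
        [ "$first" = 1 ] || printf ',\n'
        first=0
        printf '{ "params": { "name": "Nvme%s", "trtype": "tcp", "traddr": "%s",
  "adrfam": "ipv4", "trsvcid": "4420",
  "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s",
  "hdgst": false, "ddgst": false },
  "method": "bdev_nvme_attach_controller" }' "$i" "$ip" "$i" "$i"
    done
    printf '\n'
}

# Example: three controllers against the namespaced target address.
gen_target_json 10.0.0.2 1 2 3
```

Because the config arrives on an anonymous file descriptor, bdevperf attaches to all ten subsystems at startup without any intermediate config file on disk.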
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:45.846 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:20:45.846 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:20:45.846 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:45.846 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:45.846 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:45.846 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:45.846 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.846 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:45.846 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.846 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:20:45.846 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:20:45.846 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:46.106 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:46.106 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:46.106 18:58:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:46.106 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:46.106 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.106 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:46.106 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.106 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:20:46.106 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:20:46.106 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:20:46.106 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:20:46.106 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:20:46.106 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3701595 00:20:46.106 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3701595 ']' 00:20:46.106 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3701595 00:20:46.106 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:46.106 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:46.106 18:58:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3701595 00:20:46.106 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:46.106 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:46.106 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3701595' 00:20:46.106 killing process with pid 3701595 00:20:46.106 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3701595 00:20:46.106 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3701595 00:20:46.365 Received shutdown signal, test time was about 0.894223 seconds 00:20:46.365 00:20:46.365 Latency(us) 00:20:46.365 [2024-11-20T17:58:08.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:46.365 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:46.365 Verification LBA range: start 0x0 length 0x400 00:20:46.365 Nvme1n1 : 0.89 287.15 17.95 0.00 0.00 220454.03 17101.78 212711.13 00:20:46.365 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:46.365 Verification LBA range: start 0x0 length 0x400 00:20:46.365 Nvme2n1 : 0.89 286.49 17.91 0.00 0.00 217045.09 18100.42 218702.99 00:20:46.365 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:46.365 Verification LBA range: start 0x0 length 0x400 00:20:46.365 Nvme3n1 : 0.87 292.67 18.29 0.00 0.00 208582.22 13793.77 212711.13 00:20:46.366 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:46.366 Verification LBA range: start 0x0 length 0x400 00:20:46.366 Nvme4n1 : 0.87 306.00 19.12 0.00 0.00 193689.97 
6896.88 215707.06 00:20:46.366 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:46.366 Verification LBA range: start 0x0 length 0x400 00:20:46.366 Nvme5n1 : 0.89 294.86 18.43 0.00 0.00 198719.06 3994.58 205720.62 00:20:46.366 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:46.366 Verification LBA range: start 0x0 length 0x400 00:20:46.366 Nvme6n1 : 0.88 296.73 18.55 0.00 0.00 193445.95 4119.41 203723.34 00:20:46.366 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:46.366 Verification LBA range: start 0x0 length 0x400 00:20:46.366 Nvme7n1 : 0.89 288.47 18.03 0.00 0.00 195824.40 12607.88 216705.71 00:20:46.366 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:46.366 Verification LBA range: start 0x0 length 0x400 00:20:46.366 Nvme8n1 : 0.88 289.81 18.11 0.00 0.00 191449.84 13668.94 220700.28 00:20:46.366 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:46.366 Verification LBA range: start 0x0 length 0x400 00:20:46.366 Nvme9n1 : 0.86 224.09 14.01 0.00 0.00 241476.10 18849.40 219701.64 00:20:46.366 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:46.366 Verification LBA range: start 0x0 length 0x400 00:20:46.366 Nvme10n1 : 0.86 222.48 13.90 0.00 0.00 238434.99 18974.23 232684.01 00:20:46.366 [2024-11-20T17:58:08.691Z] =================================================================================================================== 00:20:46.366 [2024-11-20T17:58:08.691Z] Total : 2788.74 174.30 0.00 0.00 208233.23 3994.58 232684.01 00:20:46.366 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:20:47.745 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3701314 00:20:47.745 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # 
stoptarget 00:20:47.745 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:47.745 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:47.745 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:47.745 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:47.745 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:47.745 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:20:47.745 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:47.745 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:20:47.745 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:47.745 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:47.745 rmmod nvme_tcp 00:20:47.745 rmmod nvme_fabrics 00:20:47.745 rmmod nvme_keyring 00:20:47.745 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:47.745 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:20:47.745 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:20:47.745 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3701314 ']' 
00:20:47.745 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3701314 00:20:47.745 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3701314 ']' 00:20:47.745 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3701314 00:20:47.745 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:47.745 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:47.745 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3701314 00:20:47.745 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:47.745 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:47.745 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3701314' 00:20:47.745 killing process with pid 3701314 00:20:47.745 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3701314 00:20:47.745 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3701314 00:20:48.005 18:58:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:48.005 18:58:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:48.005 18:58:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:48.005 18:58:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:20:48.005 18:58:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:20:48.005 18:58:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:48.005 18:58:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:20:48.005 18:58:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:48.005 18:58:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:48.005 18:58:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.005 18:58:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:48.005 18:58:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:49.914 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:49.914 00:20:49.914 real 0m7.990s 00:20:49.914 user 0m24.241s 00:20:49.914 sys 0m1.346s 00:20:49.914 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:49.914 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:49.914 ************************************ 00:20:49.914 END TEST nvmf_shutdown_tc2 00:20:49.914 ************************************ 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:50.174 18:58:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:50.174 ************************************ 00:20:50.174 START TEST nvmf_shutdown_tc3 00:20:50.174 ************************************ 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.174 18:58:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 
00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:50.174 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:50.174 18:58:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:50.174 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:50.174 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:50.175 Found net devices under 0000:86:00.0: cvl_0_0 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:50.175 Found net devices under 0000:86:00.1: cvl_0_1 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.175 
18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:50.175 18:58:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:50.175 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:50.474 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:50.474 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:50.474 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:50.474 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:20:50.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:50.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:20:50.474 00:20:50.474 --- 10.0.0.2 ping statistics --- 00:20:50.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.474 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:20:50.474 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:50.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:50.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:20:50.474 00:20:50.474 --- 10.0.0.1 ping statistics --- 00:20:50.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.474 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:20:50.474 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:50.474 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:20:50.474 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:50.474 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:50.474 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:50.474 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:50.474 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:50.474 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:50.474 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # 
modprobe nvme-tcp 00:20:50.474 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:50.474 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:50.474 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:50.474 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:50.474 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3702820 00:20:50.474 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3702820 00:20:50.474 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:50.474 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3702820 ']' 00:20:50.474 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.474 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:50.475 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:50.475 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:50.475 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:50.475 [2024-11-20 18:58:12.699242] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:20:50.475 [2024-11-20 18:58:12.699290] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:50.775 [2024-11-20 18:58:12.776669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:50.775 [2024-11-20 18:58:12.819122] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:50.775 [2024-11-20 18:58:12.819155] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:50.775 [2024-11-20 18:58:12.819163] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:50.775 [2024-11-20 18:58:12.819169] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:50.775 [2024-11-20 18:58:12.819174] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:50.775 [2024-11-20 18:58:12.820709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:50.775 [2024-11-20 18:58:12.820815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:50.775 [2024-11-20 18:58:12.820921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:50.775 [2024-11-20 18:58:12.820922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:50.775 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:50.775 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:20:50.775 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:50.775 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:50.775 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:50.775 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:50.775 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:50.775 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.775 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:50.775 [2024-11-20 18:58:12.970557] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:50.775 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.775 18:58:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:50.775 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:50.775 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:50.775 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:50.775 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:50.775 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:50.775 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:50.775 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:50.775 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:50.775 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:50.775 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:50.775 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:50.775 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:50.775 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:50.775 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:20:50.775 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:50.775 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:50.775 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:50.775 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:50.775 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:50.775 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:50.775 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:50.775 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:50.775 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:50.775 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:50.775 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:50.775 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.775 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:50.775 Malloc1 00:20:50.775 [2024-11-20 18:58:13.075372] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:51.091 Malloc2 00:20:51.091 Malloc3 00:20:51.091 Malloc4 00:20:51.091 Malloc5 00:20:51.091 Malloc6 00:20:51.091 Malloc7 00:20:51.091 Malloc8 00:20:51.421 Malloc9 
00:20:51.421 Malloc10 00:20:51.421 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.421 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:51.421 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:51.421 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:51.421 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3702926 00:20:51.421 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3702926 /var/tmp/bdevperf.sock 00:20:51.421 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3702926 ']' 00:20:51.421 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:51.421 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:51.421 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:51.421 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:51.421 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:51.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:51.421 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:20:51.421 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:51.421 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:20:51.422 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:51.422 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.422 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.422 { 00:20:51.422 "params": { 00:20:51.422 "name": "Nvme$subsystem", 00:20:51.422 "trtype": "$TEST_TRANSPORT", 00:20:51.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.422 "adrfam": "ipv4", 00:20:51.422 "trsvcid": "$NVMF_PORT", 00:20:51.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.422 "hdgst": ${hdgst:-false}, 00:20:51.422 "ddgst": ${ddgst:-false} 00:20:51.422 }, 00:20:51.422 "method": "bdev_nvme_attach_controller" 00:20:51.422 } 00:20:51.422 EOF 00:20:51.422 )") 00:20:51.422 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:51.422 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.422 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.422 { 00:20:51.422 "params": { 00:20:51.422 "name": "Nvme$subsystem", 00:20:51.422 "trtype": "$TEST_TRANSPORT", 00:20:51.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.422 
"adrfam": "ipv4", 00:20:51.422 "trsvcid": "$NVMF_PORT", 00:20:51.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.422 "hdgst": ${hdgst:-false}, 00:20:51.422 "ddgst": ${ddgst:-false} 00:20:51.422 }, 00:20:51.422 "method": "bdev_nvme_attach_controller" 00:20:51.422 } 00:20:51.422 EOF 00:20:51.422 )") 00:20:51.422 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:51.422 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.422 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.422 { 00:20:51.422 "params": { 00:20:51.422 "name": "Nvme$subsystem", 00:20:51.422 "trtype": "$TEST_TRANSPORT", 00:20:51.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.422 "adrfam": "ipv4", 00:20:51.422 "trsvcid": "$NVMF_PORT", 00:20:51.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.422 "hdgst": ${hdgst:-false}, 00:20:51.422 "ddgst": ${ddgst:-false} 00:20:51.422 }, 00:20:51.422 "method": "bdev_nvme_attach_controller" 00:20:51.422 } 00:20:51.422 EOF 00:20:51.422 )") 00:20:51.422 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:51.422 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.422 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.422 { 00:20:51.422 "params": { 00:20:51.422 "name": "Nvme$subsystem", 00:20:51.422 "trtype": "$TEST_TRANSPORT", 00:20:51.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.422 "adrfam": "ipv4", 00:20:51.422 "trsvcid": "$NVMF_PORT", 00:20:51.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:20:51.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.422 "hdgst": ${hdgst:-false}, 00:20:51.422 "ddgst": ${ddgst:-false} 00:20:51.422 }, 00:20:51.422 "method": "bdev_nvme_attach_controller" 00:20:51.422 } 00:20:51.422 EOF 00:20:51.422 )") 00:20:51.422 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:51.422 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.422 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.422 { 00:20:51.422 "params": { 00:20:51.422 "name": "Nvme$subsystem", 00:20:51.422 "trtype": "$TEST_TRANSPORT", 00:20:51.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.422 "adrfam": "ipv4", 00:20:51.422 "trsvcid": "$NVMF_PORT", 00:20:51.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.422 "hdgst": ${hdgst:-false}, 00:20:51.422 "ddgst": ${ddgst:-false} 00:20:51.422 }, 00:20:51.422 "method": "bdev_nvme_attach_controller" 00:20:51.422 } 00:20:51.422 EOF 00:20:51.422 )") 00:20:51.422 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:51.422 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.422 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.422 { 00:20:51.422 "params": { 00:20:51.422 "name": "Nvme$subsystem", 00:20:51.422 "trtype": "$TEST_TRANSPORT", 00:20:51.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.422 "adrfam": "ipv4", 00:20:51.422 "trsvcid": "$NVMF_PORT", 00:20:51.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.422 "hdgst": ${hdgst:-false}, 00:20:51.422 "ddgst": 
${ddgst:-false} 00:20:51.422 }, 00:20:51.422 "method": "bdev_nvme_attach_controller" 00:20:51.422 } 00:20:51.422 EOF 00:20:51.422 )") 00:20:51.422 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:51.422 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.422 [2024-11-20 18:58:13.543384] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:20:51.422 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.422 { 00:20:51.422 "params": { 00:20:51.422 "name": "Nvme$subsystem", 00:20:51.422 "trtype": "$TEST_TRANSPORT", 00:20:51.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.422 "adrfam": "ipv4", 00:20:51.422 "trsvcid": "$NVMF_PORT", 00:20:51.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.422 "hdgst": ${hdgst:-false}, 00:20:51.422 "ddgst": ${ddgst:-false} 00:20:51.422 }, 00:20:51.422 "method": "bdev_nvme_attach_controller" 00:20:51.422 } 00:20:51.422 EOF 00:20:51.422 )") 00:20:51.422 [2024-11-20 18:58:13.543435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3702926 ] 00:20:51.422 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:51.422 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.422 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.422 { 00:20:51.422 "params": { 00:20:51.422 "name": "Nvme$subsystem", 00:20:51.422 "trtype": "$TEST_TRANSPORT", 00:20:51.422 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.422 "adrfam": "ipv4", 00:20:51.422 "trsvcid": "$NVMF_PORT", 00:20:51.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.422 "hdgst": ${hdgst:-false}, 00:20:51.422 "ddgst": ${ddgst:-false} 00:20:51.422 }, 00:20:51.422 "method": "bdev_nvme_attach_controller" 00:20:51.422 } 00:20:51.422 EOF 00:20:51.422 )") 00:20:51.422 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:51.422 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.422 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.422 { 00:20:51.422 "params": { 00:20:51.422 "name": "Nvme$subsystem", 00:20:51.422 "trtype": "$TEST_TRANSPORT", 00:20:51.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.422 "adrfam": "ipv4", 00:20:51.422 "trsvcid": "$NVMF_PORT", 00:20:51.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.422 "hdgst": ${hdgst:-false}, 00:20:51.422 "ddgst": ${ddgst:-false} 00:20:51.422 }, 00:20:51.422 "method": "bdev_nvme_attach_controller" 00:20:51.422 } 00:20:51.422 EOF 00:20:51.422 )") 00:20:51.422 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:51.422 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.422 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.422 { 00:20:51.422 "params": { 00:20:51.422 "name": "Nvme$subsystem", 00:20:51.422 "trtype": "$TEST_TRANSPORT", 00:20:51.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.422 "adrfam": "ipv4", 00:20:51.422 "trsvcid": "$NVMF_PORT", 00:20:51.422 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.422 "hdgst": ${hdgst:-false}, 00:20:51.422 "ddgst": ${ddgst:-false} 00:20:51.422 }, 00:20:51.422 "method": "bdev_nvme_attach_controller" 00:20:51.422 } 00:20:51.422 EOF 00:20:51.422 )") 00:20:51.422 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:51.422 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:20:51.422 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:20:51.422 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:51.422 "params": { 00:20:51.422 "name": "Nvme1", 00:20:51.423 "trtype": "tcp", 00:20:51.423 "traddr": "10.0.0.2", 00:20:51.423 "adrfam": "ipv4", 00:20:51.423 "trsvcid": "4420", 00:20:51.423 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.423 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:51.423 "hdgst": false, 00:20:51.423 "ddgst": false 00:20:51.423 }, 00:20:51.423 "method": "bdev_nvme_attach_controller" 00:20:51.423 },{ 00:20:51.423 "params": { 00:20:51.423 "name": "Nvme2", 00:20:51.423 "trtype": "tcp", 00:20:51.423 "traddr": "10.0.0.2", 00:20:51.423 "adrfam": "ipv4", 00:20:51.423 "trsvcid": "4420", 00:20:51.423 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:51.423 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:51.423 "hdgst": false, 00:20:51.423 "ddgst": false 00:20:51.423 }, 00:20:51.423 "method": "bdev_nvme_attach_controller" 00:20:51.423 },{ 00:20:51.423 "params": { 00:20:51.423 "name": "Nvme3", 00:20:51.423 "trtype": "tcp", 00:20:51.423 "traddr": "10.0.0.2", 00:20:51.423 "adrfam": "ipv4", 00:20:51.423 "trsvcid": "4420", 00:20:51.423 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:51.423 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:51.423 "hdgst": false, 00:20:51.423 "ddgst": false 00:20:51.423 }, 00:20:51.423 
"method": "bdev_nvme_attach_controller" 00:20:51.423 },{ 00:20:51.423 "params": { 00:20:51.423 "name": "Nvme4", 00:20:51.423 "trtype": "tcp", 00:20:51.423 "traddr": "10.0.0.2", 00:20:51.423 "adrfam": "ipv4", 00:20:51.423 "trsvcid": "4420", 00:20:51.423 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:51.423 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:51.423 "hdgst": false, 00:20:51.423 "ddgst": false 00:20:51.423 }, 00:20:51.423 "method": "bdev_nvme_attach_controller" 00:20:51.423 },{ 00:20:51.423 "params": { 00:20:51.423 "name": "Nvme5", 00:20:51.423 "trtype": "tcp", 00:20:51.423 "traddr": "10.0.0.2", 00:20:51.423 "adrfam": "ipv4", 00:20:51.423 "trsvcid": "4420", 00:20:51.423 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:51.423 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:51.423 "hdgst": false, 00:20:51.423 "ddgst": false 00:20:51.423 }, 00:20:51.423 "method": "bdev_nvme_attach_controller" 00:20:51.423 },{ 00:20:51.423 "params": { 00:20:51.423 "name": "Nvme6", 00:20:51.423 "trtype": "tcp", 00:20:51.423 "traddr": "10.0.0.2", 00:20:51.423 "adrfam": "ipv4", 00:20:51.423 "trsvcid": "4420", 00:20:51.423 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:51.423 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:51.423 "hdgst": false, 00:20:51.423 "ddgst": false 00:20:51.423 }, 00:20:51.423 "method": "bdev_nvme_attach_controller" 00:20:51.423 },{ 00:20:51.423 "params": { 00:20:51.423 "name": "Nvme7", 00:20:51.423 "trtype": "tcp", 00:20:51.423 "traddr": "10.0.0.2", 00:20:51.423 "adrfam": "ipv4", 00:20:51.423 "trsvcid": "4420", 00:20:51.423 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:51.423 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:51.423 "hdgst": false, 00:20:51.423 "ddgst": false 00:20:51.423 }, 00:20:51.423 "method": "bdev_nvme_attach_controller" 00:20:51.423 },{ 00:20:51.423 "params": { 00:20:51.423 "name": "Nvme8", 00:20:51.423 "trtype": "tcp", 00:20:51.423 "traddr": "10.0.0.2", 00:20:51.423 "adrfam": "ipv4", 00:20:51.423 "trsvcid": "4420", 00:20:51.423 "subnqn": 
"nqn.2016-06.io.spdk:cnode8", 00:20:51.423 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:51.423 "hdgst": false, 00:20:51.423 "ddgst": false 00:20:51.423 }, 00:20:51.423 "method": "bdev_nvme_attach_controller" 00:20:51.423 },{ 00:20:51.423 "params": { 00:20:51.423 "name": "Nvme9", 00:20:51.423 "trtype": "tcp", 00:20:51.423 "traddr": "10.0.0.2", 00:20:51.423 "adrfam": "ipv4", 00:20:51.423 "trsvcid": "4420", 00:20:51.423 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:51.423 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:51.423 "hdgst": false, 00:20:51.423 "ddgst": false 00:20:51.423 }, 00:20:51.423 "method": "bdev_nvme_attach_controller" 00:20:51.423 },{ 00:20:51.423 "params": { 00:20:51.423 "name": "Nvme10", 00:20:51.423 "trtype": "tcp", 00:20:51.423 "traddr": "10.0.0.2", 00:20:51.423 "adrfam": "ipv4", 00:20:51.423 "trsvcid": "4420", 00:20:51.423 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:51.423 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:51.423 "hdgst": false, 00:20:51.423 "ddgst": false 00:20:51.423 }, 00:20:51.423 "method": "bdev_nvme_attach_controller" 00:20:51.423 }' 00:20:51.423 [2024-11-20 18:58:13.618601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.423 [2024-11-20 18:58:13.659430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.328 Running I/O for 10 seconds... 
00:20:53.328 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:53.328 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:20:53.328 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:53.328 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.328 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:53.328 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.328 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:53.329 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:53.329 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:53.329 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:53.329 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:20:53.329 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:20:53.329 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:53.329 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:53.329 18:58:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:53.329 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:53.329 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.329 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:53.329 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.329 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:20:53.329 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:20:53.329 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:53.596 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:53.596 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:53.596 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:53.596 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:53.596 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.596 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:53.596 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:20:53.596 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:20:53.596 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:20:53.596 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:20:53.596 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:20:53.596 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:20:53.596 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3702820 00:20:53.596 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3702820 ']' 00:20:53.596 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3702820 00:20:53.596 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:20:53.596 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:53.596 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3702820 00:20:53.596 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:53.596 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:53.596 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3702820' 00:20:53.596 killing process with pid 3702820 00:20:53.596 18:58:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3702820 00:20:53.596 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3702820 00:20:53.596 [2024-11-20 18:58:15.880230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96a850 is same with the state(6) to be set 00:20:53.596 [2024-11-20 18:58:15.880288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96a850 is same with the state(6) to be set 00:20:53.596 [2024-11-20 18:58:15.880296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96a850 is same with the state(6) to be set 00:20:53.596 [2024-11-20 18:58:15.880303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96a850 is same with the state(6) to be set 00:20:53.596 [2024-11-20 18:58:15.880310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96a850 is same with the state(6) to be set 00:20:53.596 [2024-11-20 18:58:15.880316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96a850 is same with the state(6) to be set 00:20:53.596 [2024-11-20 18:58:15.880323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96a850 is same with the state(6) to be set 00:20:53.596 [2024-11-20 18:58:15.880329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96a850 is same with the state(6) to be set 00:20:53.596 [2024-11-20 18:58:15.880336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96a850 is same with the state(6) to be set 00:20:53.596 [2024-11-20 18:58:15.880342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96a850 is same with the state(6) to be set 00:20:53.596 [2024-11-20 18:58:15.880348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96a850 
is same with the state(6) to be set 00:20:53.596 [2024-11-20 18:58:15.880354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96a850 is same with the state(6) to be set [previous message repeated for tqpair=0x96a850; duplicate log lines omitted]
00:20:53.597 [2024-11-20 18:58:15.881780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d400 is same with the state(6) to be set [previous message repeated for tqpair=0x96d400; duplicate log lines omitted]
00:20:53.597 [2024-11-20 18:58:15.884383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96b1f0 is same with the state(6) to be set [previous message repeated for tqpair=0x96b1f0; duplicate log lines omitted]
00:20:53.598 [2024-11-20 18:58:15.885880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96b6e0 is same with the state(6) to be set [previous message repeated for tqpair=0x96b6e0; duplicate log lines omitted]
00:20:53.599 [2024-11-20 18:58:15.887927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.887941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.887948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.887954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0
is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.887961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.887967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.887973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.887980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.887986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.887992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.887998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 
00:20:53.599 [2024-11-20 18:58:15.888039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888114] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 
is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.888305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c0a0 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.889383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.889398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.889405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.889412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.599 [2024-11-20 18:58:15.889419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 
00:20:53.600 [2024-11-20 18:58:15.889431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889510] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 
is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 
00:20:53.600 [2024-11-20 18:58:15.889737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.889780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c570 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.890592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.890608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.890614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.890622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.890629] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.890635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.890641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.890647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.890653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.890659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.890664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.890669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.890675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.600 [2024-11-20 18:58:15.890685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 
is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 
00:20:53.601 [2024-11-20 18:58:15.890856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890926] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.890980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ca40 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.891546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.891564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set 00:20:53.601 [2024-11-20 18:58:15.891571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set
00:20:53.601 [2024-11-20 18:58:15.891577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set
00:20:53.601 [2024-11-20 18:58:15.891584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set
00:20:53.601 [2024-11-20 18:58:15.891590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set
00:20:53.601 [2024-11-20 18:58:15.891579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.601 [2024-11-20 18:58:15.891597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set
00:20:53.601 [2024-11-20 18:58:15.891603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set
00:20:53.601 [2024-11-20 18:58:15.891610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set
00:20:53.601 [2024-11-20 18:58:15.891610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.601 [2024-11-20 18:58:15.891621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set
00:20:53.601 [2024-11-20 18:58:15.891629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set
00:20:53.601 [2024-11-20 18:58:15.891630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.601 [2024-11-20 18:58:15.891635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set
00:20:53.601 [2024-11-20 18:58:15.891639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.601 [2024-11-20 18:58:15.891642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set
00:20:53.601 [2024-11-20 18:58:15.891649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set
00:20:53.601 [2024-11-20 18:58:15.891649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.601 [2024-11-20 18:58:15.891654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set
00:20:53.601 [2024-11-20 18:58:15.891658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.601 [2024-11-20 18:58:15.891662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set
00:20:53.601 [2024-11-20 18:58:15.891667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.601 [2024-11-20 18:58:15.891673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set
00:20:53.601 [2024-11-20 18:58:15.891675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.601 [2024-11-20 18:58:15.891680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set
00:20:53.601 [2024-11-20 18:58:15.891685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.601 [2024-11-20 18:58:15.891688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set
00:20:53.601 [2024-11-20 18:58:15.891692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.601 [2024-11-20 18:58:15.891696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set
00:20:53.601 [2024-11-20 18:58:15.891701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.601 [2024-11-20 18:58:15.891703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set
00:20:53.601 [2024-11-20 18:58:15.891709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.601 [2024-11-20 18:58:15.891710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set
00:20:53.601 [2024-11-20 18:58:15.891719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set
00:20:53.602 [2024-11-20 18:58:15.891724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.602 [2024-11-20 18:58:15.891726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set
00:20:53.602 [2024-11-20 18:58:15.891732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
c[2024-11-20 18:58:15.891733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.602 he state(6) to be set 00:20:53.602 [2024-11-20 18:58:15.891741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set 00:20:53.602 [2024-11-20 18:58:15.891743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.602 [2024-11-20 18:58:15.891748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set 00:20:53.602 [2024-11-20 18:58:15.891750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.602 [2024-11-20 18:58:15.891756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set 00:20:53.602 [2024-11-20 18:58:15.891759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.602 [2024-11-20 18:58:15.891763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set 00:20:53.602 [2024-11-20 18:58:15.891767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.602 [2024-11-20 18:58:15.891770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set 00:20:53.602 [2024-11-20 18:58:15.891776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.602 [2024-11-20 18:58:15.891779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x96cf10 is same with the state(6) to be set 00:20:53.602 [2024-11-20 18:58:15.891784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.602 [2024-11-20 18:58:15.891787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set 00:20:53.602 [2024-11-20 18:58:15.891793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.602 [2024-11-20 18:58:15.891795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set 00:20:53.602 [2024-11-20 18:58:15.891800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.602 [2024-11-20 18:58:15.891802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set 00:20:53.602 [2024-11-20 18:58:15.891809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with t[2024-11-20 18:58:15.891809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:1he state(6) to be set 00:20:53.602 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.602 [2024-11-20 18:58:15.891818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with t[2024-11-20 18:58:15.891819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:20:53.602 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.602 [2024-11-20 18:58:15.891826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set 00:20:53.602 [2024-11-20 18:58:15.891829] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.602 [2024-11-20 18:58:15.891833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set 00:20:53.602 [2024-11-20 18:58:15.891836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.602 [2024-11-20 18:58:15.891840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set 00:20:53.602 [2024-11-20 18:58:15.891845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.602 [2024-11-20 18:58:15.891847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set 00:20:53.602 [2024-11-20 18:58:15.891853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.602 [2024-11-20 18:58:15.891854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set 00:20:53.602 [2024-11-20 18:58:15.891862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with t[2024-11-20 18:58:15.891862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:1he state(6) to be set 00:20:53.602 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.602 [2024-11-20 18:58:15.891870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set 00:20:53.602 [2024-11-20 18:58:15.891871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.602 
[2024-11-20 18:58:15.891877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set 00:20:53.602 [2024-11-20 18:58:15.891880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.602 [2024-11-20 18:58:15.891886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set 00:20:53.602 [2024-11-20 18:58:15.891887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.602 [2024-11-20 18:58:15.891893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set 00:20:53.602 [2024-11-20 18:58:15.891897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.602 [2024-11-20 18:58:15.891900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set 00:20:53.602 [2024-11-20 18:58:15.891907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with t[2024-11-20 18:58:15.891907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:20:53.602 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.602 [2024-11-20 18:58:15.891918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.602 [2024-11-20 18:58:15.891924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.602 [2024-11-20 18:58:15.891933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.602 [2024-11-20 18:58:15.891940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.602 [2024-11-20 18:58:15.891948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.602 [2024-11-20 18:58:15.891955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.602 [2024-11-20 18:58:15.891963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.602 [2024-11-20 18:58:15.891969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.602 [2024-11-20 18:58:15.891978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.602 [2024-11-20 18:58:15.891984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.602 [2024-11-20 18:58:15.891994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.602 [2024-11-20 18:58:15.892002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.602 [2024-11-20 18:58:15.892010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.602 [2024-11-20 18:58:15.892017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:53.602 [2024-11-20 18:58:15.892024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.602 [2024-11-20 18:58:15.892031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.602 [2024-11-20 18:58:15.892039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.602 [2024-11-20 18:58:15.892046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.602 [2024-11-20 18:58:15.892058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.602 [2024-11-20 18:58:15.892065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.602 [2024-11-20 18:58:15.892073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.602 [2024-11-20 18:58:15.892079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.602 [2024-11-20 18:58:15.892087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.602 [2024-11-20 18:58:15.892094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.602 [2024-11-20 18:58:15.892103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.602 [2024-11-20 18:58:15.892109] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.602 [2024-11-20 18:58:15.892117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.602 [2024-11-20 18:58:15.892124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.602 [2024-11-20 18:58:15.892132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.602 [2024-11-20 18:58:15.892139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.602 [2024-11-20 18:58:15.892147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.602 [2024-11-20 18:58:15.892154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.602 [2024-11-20 18:58:15.892162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.602 [2024-11-20 18:58:15.892168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.892176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.603 [2024-11-20 18:58:15.892183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.892190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.603 [2024-11-20 18:58:15.892197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.892212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.603 [2024-11-20 18:58:15.892219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.892227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.603 [2024-11-20 18:58:15.892234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.892243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.603 [2024-11-20 18:58:15.892252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.892260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.603 [2024-11-20 18:58:15.892268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.892276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.603 [2024-11-20 18:58:15.892282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.892291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.603 [2024-11-20 18:58:15.892296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.892305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.603 [2024-11-20 18:58:15.892312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.892319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.603 [2024-11-20 18:58:15.892326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.892334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.603 [2024-11-20 18:58:15.892340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.892348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.603 [2024-11-20 18:58:15.892355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.892363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.603 
[2024-11-20 18:58:15.892369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.892378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.603 [2024-11-20 18:58:15.892385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.892393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.603 [2024-11-20 18:58:15.892400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.892408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.603 [2024-11-20 18:58:15.892414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.892424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.603 [2024-11-20 18:58:15.892432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.892439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.603 [2024-11-20 18:58:15.892446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.892454] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.603 [2024-11-20 18:58:15.892460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.892470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.603 [2024-11-20 18:58:15.892476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.892485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.603 [2024-11-20 18:58:15.892491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.892499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.603 [2024-11-20 18:58:15.892505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.892513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.603 [2024-11-20 18:58:15.892520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.892528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.603 [2024-11-20 18:58:15.892535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.892543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.603 [2024-11-20 18:58:15.892549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.892557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.603 [2024-11-20 18:58:15.892563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.892571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.603 [2024-11-20 18:58:15.892578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.892586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.603 [2024-11-20 18:58:15.892592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.892600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.603 [2024-11-20 18:58:15.892606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.892615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.603 [2024-11-20 18:58:15.892623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.892647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:53.603 [2024-11-20 18:58:15.892969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.603 [2024-11-20 18:58:15.892989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.892998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.603 [2024-11-20 18:58:15.893004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.893012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.603 [2024-11-20 18:58:15.893019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.893026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.603 [2024-11-20 18:58:15.893033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.893040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a7510 is same with the state(6) to be set 00:20:53.603 [2024-11-20 
18:58:15.893076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.603 [2024-11-20 18:58:15.893086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.893094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.603 [2024-11-20 18:58:15.893101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.893108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.603 [2024-11-20 18:58:15.893114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.893121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.603 [2024-11-20 18:58:15.893128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.603 [2024-11-20 18:58:15.893134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106f110 is same with the state(6) to be set 00:20:53.604 [2024-11-20 18:58:15.893155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.604 [2024-11-20 18:58:15.893163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.604 [2024-11-20 18:58:15.893170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.604 [2024-11-20 18:58:15.893176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.604 [2024-11-20 18:58:15.893184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.604 [2024-11-20 18:58:15.893193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.604 [2024-11-20 18:58:15.893206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.604 [2024-11-20 18:58:15.893214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.604 [2024-11-20 18:58:15.893220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc381e0 is same with the state(6) to be set 00:20:53.604 [2024-11-20 18:58:15.893243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.604 [2024-11-20 18:58:15.893252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.604 [2024-11-20 18:58:15.893261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.604 [2024-11-20 18:58:15.893270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.604 [2024-11-20 18:58:15.893277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.604 [2024-11-20 
18:58:15.893284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.604 [2024-11-20 18:58:15.893291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.604 [2024-11-20 18:58:15.893298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.604 [2024-11-20 18:58:15.893304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc432c0 is same with the state(6) to be set 00:20:53.604 [2024-11-20 18:58:15.893328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.604 [2024-11-20 18:58:15.893336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.604 [2024-11-20 18:58:15.893344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.604 [2024-11-20 18:58:15.893350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.604 [2024-11-20 18:58:15.893357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.604 [2024-11-20 18:58:15.893364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.604 [2024-11-20 18:58:15.893371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.604 [2024-11-20 18:58:15.893377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.604 [2024-11-20 18:58:15.893384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1066e80 is same with the state(6) to be set 00:20:53.604 [2024-11-20 18:58:15.893406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.604 [2024-11-20 18:58:15.893414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.604 [2024-11-20 18:58:15.893421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.604 [2024-11-20 18:58:15.893430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.604 [2024-11-20 18:58:15.893437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.604 [2024-11-20 18:58:15.893444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.604 [2024-11-20 18:58:15.893451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.604 [2024-11-20 18:58:15.893457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.604 [2024-11-20 18:58:15.893463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc37fe0 is same with the state(6) to be set 00:20:53.604 [2024-11-20 18:58:15.893486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.604 [2024-11-20 18:58:15.893494] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.604 [2024-11-20 18:58:15.893501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.604 [2024-11-20 18:58:15.893508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.604 [2024-11-20 18:58:15.893515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.604 [2024-11-20 18:58:15.893522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.604 [2024-11-20 18:58:15.893528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.604 [2024-11-20 18:58:15.893536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.604 [2024-11-20 18:58:15.893544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc441b0 is same with the state(6) to be set 00:20:53.604 [2024-11-20 18:58:15.893566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.604 [2024-11-20 18:58:15.893574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.604 [2024-11-20 18:58:15.893582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.604 [2024-11-20 18:58:15.893588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.604 [2024-11-20 18:58:15.893595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.604 [2024-11-20 18:58:15.893602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.604 [2024-11-20 18:58:15.893610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.604 [2024-11-20 18:58:15.893616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.604 [2024-11-20 18:58:15.893623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb58610 is same with the state(6) to be set 00:20:53.604 [2024-11-20 18:58:15.893645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.604 [2024-11-20 18:58:15.893652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.604 [2024-11-20 18:58:15.893664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.604 [2024-11-20 18:58:15.893671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.604 [2024-11-20 18:58:15.893678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.604 [2024-11-20 18:58:15.893685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.604 [2024-11-20 18:58:15.893691] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.604 [2024-11-20 18:58:15.893698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.604 [2024-11-20 18:58:15.893704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1066ca0 is same with the state(6) to be set 00:20:53.604 [2024-11-20 18:58:15.893793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.604 [2024-11-20 18:58:15.893804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.604 [2024-11-20 18:58:15.893816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.604 [2024-11-20 18:58:15.893823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.604 [2024-11-20 18:58:15.893831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.604 [2024-11-20 18:58:15.893838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.604 [2024-11-20 18:58:15.893847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.604 [2024-11-20 18:58:15.893853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.604 [2024-11-20 18:58:15.893861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 
lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.604 [2024-11-20 18:58:15.893868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.605 [2024-11-20 18:58:15.893878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.605 [2024-11-20 18:58:15.893885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.605 [2024-11-20 18:58:15.893893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.605 [2024-11-20 18:58:15.893901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.605 [2024-11-20 18:58:15.893909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.605 [2024-11-20 18:58:15.893917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.605 [2024-11-20 18:58:15.893925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.605 [2024-11-20 18:58:15.893932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.605 [2024-11-20 18:58:15.893942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.605 [2024-11-20 18:58:15.893949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:53.605 [2024-11-20 18:58:15.893956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.605 [2024-11-20 18:58:15.893963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.605 [2024-11-20 18:58:15.893972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.605 [2024-11-20 18:58:15.893979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.605 [2024-11-20 18:58:15.893988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.605 [2024-11-20 18:58:15.893994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.605 [2024-11-20 18:58:15.894002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.605 [2024-11-20 18:58:15.894009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.605 [2024-11-20 18:58:15.894017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.605 [2024-11-20 18:58:15.894024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.605 [2024-11-20 18:58:15.894032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.605 [2024-11-20 18:58:15.894038] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.605 [2024-11-20 18:58:15.894046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.605 [2024-11-20 18:58:15.894052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.605 [2024-11-20 18:58:15.894060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.605 [2024-11-20 18:58:15.894067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.605 [2024-11-20 18:58:15.894074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.605 [2024-11-20 18:58:15.894081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.605 [2024-11-20 18:58:15.894090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.605 [2024-11-20 18:58:15.894096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.605 [2024-11-20 18:58:15.894104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.605 [2024-11-20 18:58:15.894111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.605 [2024-11-20 18:58:15.894120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.605 [2024-11-20 18:58:15.894128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.605 [2024-11-20 18:58:15.894137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.605 [2024-11-20 18:58:15.894145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.605 [2024-11-20 18:58:15.894153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.605 [2024-11-20 18:58:15.894159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.605 [2024-11-20 18:58:15.894167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.605 [2024-11-20 18:58:15.894173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.605 [2024-11-20 18:58:15.894182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.605 [2024-11-20 18:58:15.894189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.605 [2024-11-20 18:58:15.894197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.605 [2024-11-20 18:58:15.894210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.605 [2024-11-20 18:58:15.894218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.605 [2024-11-20 18:58:15.894224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.605 [2024-11-20 18:58:15.894232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.605 [2024-11-20 18:58:15.894239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.605 [2024-11-20 18:58:15.894248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.605 [2024-11-20 18:58:15.894255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.605 [2024-11-20 18:58:15.894262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.605 [2024-11-20 18:58:15.894269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.605 [2024-11-20 18:58:15.894277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.605 [2024-11-20 18:58:15.894283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.605 [2024-11-20 18:58:15.894291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.605 
[2024-11-20 18:58:15.894298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.605 [2024-11-20 18:58:15.894306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.605 [2024-11-20 18:58:15.894313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.605 [2024-11-20 18:58:15.899940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set 00:20:53.605 [2024-11-20 18:58:15.899951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set 00:20:53.605 [2024-11-20 18:58:15.899958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set 00:20:53.605 [2024-11-20 18:58:15.899966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set 00:20:53.605 [2024-11-20 18:58:15.899973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set 00:20:53.605 [2024-11-20 18:58:15.899979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set 00:20:53.605 [2024-11-20 18:58:15.899985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set 00:20:53.605 [2024-11-20 18:58:15.899991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set 00:20:53.605 [2024-11-20 18:58:15.899996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set 00:20:53.605 
[2024-11-20 18:58:15.900003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set 00:20:53.605 [2024-11-20 18:58:15.900010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set 00:20:53.605 [2024-11-20 18:58:15.900016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set 00:20:53.605 [2024-11-20 18:58:15.900021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set 00:20:53.605 [2024-11-20 18:58:15.900027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set 00:20:53.605 [2024-11-20 18:58:15.900034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cf10 is same with the state(6) to be set 00:20:53.605 [2024-11-20 18:58:15.908213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.605 [2024-11-20 18:58:15.908229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.605 [2024-11-20 18:58:15.908242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.605 [2024-11-20 18:58:15.908251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.605 [2024-11-20 18:58:15.908262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.605 [2024-11-20 18:58:15.908271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.606 [2024-11-20 18:58:15.908284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.606 [2024-11-20 18:58:15.908295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.606 [2024-11-20 18:58:15.908306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.606 [2024-11-20 18:58:15.908319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.606 [2024-11-20 18:58:15.908331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.606 [2024-11-20 18:58:15.908343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.606 [2024-11-20 18:58:15.908354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.606 [2024-11-20 18:58:15.908363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.606 [2024-11-20 18:58:15.908375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.606 [2024-11-20 18:58:15.908384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.606 [2024-11-20 18:58:15.908396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.606 
[2024-11-20 18:58:15.908405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.606 [2024-11-20 18:58:15.908417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.606 [2024-11-20 18:58:15.908426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.606 [2024-11-20 18:58:15.908437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.606 [2024-11-20 18:58:15.908447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.606 [2024-11-20 18:58:15.908458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.606 [2024-11-20 18:58:15.908468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.606 [2024-11-20 18:58:15.908479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.606 [2024-11-20 18:58:15.908488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.606 [2024-11-20 18:58:15.908499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.606 [2024-11-20 18:58:15.908508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.606 [2024-11-20 18:58:15.908519] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.606 [2024-11-20 18:58:15.908528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.606 [2024-11-20 18:58:15.908540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.606 [2024-11-20 18:58:15.908549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.606 [2024-11-20 18:58:15.908561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.606 [2024-11-20 18:58:15.908571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.606 [2024-11-20 18:58:15.908581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.606 [2024-11-20 18:58:15.908591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.606 [2024-11-20 18:58:15.908604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.606 [2024-11-20 18:58:15.908613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.606 [2024-11-20 18:58:15.908624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.606 [2024-11-20 18:58:15.908633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.606 [2024-11-20 18:58:15.908645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.606 [2024-11-20 18:58:15.908654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE / ABORTED - SQ DELETION (00/08) pairs repeat for cid:55-63 (lba:23424-24448, len:128), 18:58:15.908666-15.908838 ...]
00:20:53.606 [2024-11-20 18:58:15.908849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe49150 is same with the state(6) to be set
00:20:53.606 [2024-11-20 18:58:15.910496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:20:53.606 [2024-11-20 18:58:15.910544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x106f110 (9): Bad file descriptor
00:20:53.606 [2024-11-20 18:58:15.910578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10a7510 (9): Bad file descriptor
00:20:53.606 [2024-11-20 18:58:15.910613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:53.606 [2024-11-20 18:58:15.910625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical ASYNC EVENT REQUEST / ABORTED - SQ DELETION (00/08) pairs repeat for qid:0 cid:1-3, 18:58:15.910635-15.910682 ...]
00:20:53.606 [2024-11-20 18:58:15.910692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1097140 is same with the state(6) to be set
00:20:53.606 [2024-11-20 18:58:15.910715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc381e0 (9): Bad file descriptor
00:20:53.606 [2024-11-20 18:58:15.910731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc432c0 (9): Bad file descriptor
00:20:53.606 [2024-11-20 18:58:15.910750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1066e80 (9): Bad file descriptor
00:20:53.606 [2024-11-20 18:58:15.910767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc37fe0 (9): Bad file descriptor
00:20:53.606 [2024-11-20 18:58:15.910785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc441b0 (9): Bad file descriptor
00:20:53.606 [2024-11-20 18:58:15.910803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb58610 (9): Bad file descriptor
00:20:53.606 [2024-11-20 18:58:15.910824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1066ca0 (9): Bad file descriptor
00:20:53.606 [2024-11-20 18:58:15.912564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:20:53.606 [2024-11-20 18:58:15.913457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:53.606 [2024-11-20 18:58:15.913487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x106f110 with addr=10.0.0.2, port=4420
00:20:53.606 [2024-11-20 18:58:15.913500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106f110 is same with the state(6) to be set
00:20:53.606 [2024-11-20 18:58:15.913572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:53.606 [2024-11-20 18:58:15.913585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc381e0 with addr=10.0.0.2, port=4420
00:20:53.606 [2024-11-20 18:58:15.913596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc381e0 is same with the state(6) to be set
00:20:53.607 [2024-11-20 18:58:15.913988] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
[... "Unexpected PDU type 0x00" repeats five more times, 18:58:15.914046-15.914339 ...]
00:20:53.876 [2024-11-20 18:58:15.914359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x106f110 (9): Bad file descriptor
00:20:53.876 [2024-11-20 18:58:15.914375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc381e0 (9): Bad file descriptor
00:20:53.876 [2024-11-20 18:58:15.914427] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:53.876 [2024-11-20 18:58:15.914502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.876 [2024-11-20 18:58:15.914517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:1-63 (lba:16512-24448, len:128), 18:58:15.914537-15.915894 ...]
00:20:53.878 [2024-11-20 18:58:15.915905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1049890 is same with the state(6) to be set
00:20:53.878 [2024-11-20 18:58:15.916064] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:53.878 [2024-11-20 18:58:15.916099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:20:53.878 [2024-11-20 18:58:15.916111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:20:53.878 [2024-11-20 18:58:15.916123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:20:53.878 [2024-11-20 18:58:15.916134] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:20:53.878 [2024-11-20 18:58:15.916144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:20:53.878 [2024-11-20 18:58:15.916153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:20:53.878 [2024-11-20 18:58:15.916162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:20:53.878 [2024-11-20 18:58:15.916171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:20:53.878 [2024-11-20 18:58:15.917527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:20:53.878 [2024-11-20 18:58:15.917818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:53.878 [2024-11-20 18:58:15.917840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb58610 with addr=10.0.0.2, port=4420
00:20:53.878 [2024-11-20 18:58:15.917851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb58610 is same with the state(6) to be set
00:20:53.878 [2024-11-20 18:58:15.918200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb58610 (9): Bad file descriptor
00:20:53.878 [2024-11-20 18:58:15.918265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:20:53.878 [2024-11-20 18:58:15.918278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:20:53.878 [2024-11-20 18:58:15.918289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:20:53.878 [2024-11-20 18:58:15.918299] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:20:53.878 [2024-11-20 18:58:15.920537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1097140 (9): Bad file descriptor
00:20:53.878 [2024-11-20 18:58:15.920705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.878 [2024-11-20 18:58:15.920725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:1-15 (lba:16512-18304, len:128), 18:58:15.920741-15.921061 ...]
00:20:53.878 [2024-11-20 18:58:15.921073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:20:53.878 [2024-11-20 18:58:15.921084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.878 [2024-11-20 18:58:15.921096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.878 [2024-11-20 18:58:15.921106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.878 [2024-11-20 18:58:15.921117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.878 [2024-11-20 18:58:15.921127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.878 [2024-11-20 18:58:15.921139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.878 [2024-11-20 18:58:15.921149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.878 [2024-11-20 18:58:15.921161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.878 [2024-11-20 18:58:15.921171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.878 [2024-11-20 18:58:15.921183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.878 [2024-11-20 18:58:15.921193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.878 [2024-11-20 18:58:15.921211] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.878 [2024-11-20 18:58:15.921221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.878 [2024-11-20 18:58:15.921233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.878 [2024-11-20 18:58:15.921243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.878 [2024-11-20 18:58:15.921256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.878 [2024-11-20 18:58:15.921266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.878 [2024-11-20 18:58:15.921277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.878 [2024-11-20 18:58:15.921293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.878 [2024-11-20 18:58:15.921306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.878 [2024-11-20 18:58:15.921316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.878 [2024-11-20 18:58:15.921329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.921339] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.921351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.921361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.921373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.921383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.921395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.921405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.921416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.921427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.921438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.921449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.921461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.921471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.921484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.921494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.921507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.921517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.921529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.921539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.921550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.921560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.921574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.921584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 
18:58:15.921596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.921605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.921617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.921628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.921640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.921650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.921661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.921671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.921682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.921692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.921703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.921713] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.921725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.921734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.921746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.921757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.921769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.921779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.921791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.921800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.921812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.921821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.921832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 
nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.921845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.921856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.921866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.921878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.921887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.921899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.921910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.921921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.921931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.921943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.921952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:53.879 [2024-11-20 18:58:15.921963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.921973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.921984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.921994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.922005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.922015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.922028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.922039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.922051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.922061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.922073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.922083] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.922094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.922103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.922117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.922126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.922137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe481a0 is same with the state(6) to be set 00:20:53.879 [2024-11-20 18:58:15.923496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.923514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.923529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.879 [2024-11-20 18:58:15.923539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.879 [2024-11-20 18:58:15.923553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.880 [2024-11-20 18:58:15.923564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.880 [2024-11-20 18:58:15.923576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.880 [2024-11-20 18:58:15.923586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.880 [2024-11-20 18:58:15.923598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.880 [2024-11-20 18:58:15.923609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.880 [2024-11-20 18:58:15.923621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.880 [2024-11-20 18:58:15.923631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.880 [2024-11-20 18:58:15.923644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.880 [2024-11-20 18:58:15.923654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.880 [2024-11-20 18:58:15.923666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.880 [2024-11-20 18:58:15.923676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.880 [2024-11-20 18:58:15.923687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:53.880 [2024-11-20 18:58:15.923698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.880 [2024-11-20 18:58:15.923709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.880 [2024-11-20 18:58:15.923720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.880 [2024-11-20 18:58:15.923731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.880 [2024-11-20 18:58:15.923741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.880 [2024-11-20 18:58:15.923757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.880 [2024-11-20 18:58:15.923767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.880 [2024-11-20 18:58:15.923779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.880 [2024-11-20 18:58:15.923789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.880 [2024-11-20 18:58:15.923801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.880 [2024-11-20 18:58:15.923811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.880 [2024-11-20 18:58:15.923823] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.880 [2024-11-20 18:58:15.923833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.880 [2024-11-20 18:58:15.923845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.880 [2024-11-20 18:58:15.923855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.880 [2024-11-20 18:58:15.923867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.880 [2024-11-20 18:58:15.923877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.880 [2024-11-20 18:58:15.923890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.880 [2024-11-20 18:58:15.923899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.880 [2024-11-20 18:58:15.923912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.880 [2024-11-20 18:58:15.923922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.880 [2024-11-20 18:58:15.923934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.880 [2024-11-20 18:58:15.923944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.880 [2024-11-20 18:58:15.923956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.880 [2024-11-20 18:58:15.923966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.880 [2024-11-20 18:58:15.923977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.880 [2024-11-20 18:58:15.923988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.880 [2024-11-20 18:58:15.923999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.880 [2024-11-20 18:58:15.924010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.880 [2024-11-20 18:58:15.924022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.880 [2024-11-20 18:58:15.924034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.880 [2024-11-20 18:58:15.924046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.880 [2024-11-20 18:58:15.924057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.880 [2024-11-20 18:58:15.924069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:53.880 [2024-11-20 18:58:15.924079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.880 [2024-11-20 18:58:15.924091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.880 [2024-11-20 18:58:15.924101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.880 [2024-11-20 18:58:15.924113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.880 [2024-11-20 18:58:15.924124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.880 [2024-11-20 18:58:15.924136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.880 [2024-11-20 18:58:15.924146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.880 [2024-11-20 18:58:15.924158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.880 [2024-11-20 18:58:15.924169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.880 [2024-11-20 18:58:15.924181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.880 [2024-11-20 18:58:15.924192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.880 [2024-11-20 18:58:15.924207] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.880 [2024-11-20 18:58:15.924218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.880 [2024-11-20 18:58:15.924230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.880 [2024-11-20 18:58:15.924241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.880 [2024-11-20 18:58:15.924253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.880 [2024-11-20 18:58:15.924263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.880 [2024-11-20 18:58:15.924276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.880 [2024-11-20 18:58:15.924286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.880 [2024-11-20 18:58:15.924299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.924310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.924324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.924335] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.924347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.924358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.924372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.924382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.924395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.924406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.924420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.924430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.924442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.924452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.924464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.924473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.924486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.924495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.924508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.924517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.924529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.924539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.924551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.924561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.924572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.924582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 
18:58:15.924593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.924605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.924617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.924627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.924639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.924649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.924661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.924671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.924682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.924693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.924705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.924715] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.924727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.924737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.924749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.924758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.924771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.924782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.924794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.924804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.924816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.924826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.924839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.924849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.924861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.924872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.924887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.924897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.924908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.924918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.924930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.924940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.924950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4a100 is same with the state(6) to be set 00:20:53.881 [2024-11-20 18:58:15.926115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:53.881 [2024-11-20 18:58:15.926132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.926144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.926152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.926162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.926170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.926179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.926186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.926196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.926213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.926224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.926231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.926241] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.926248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.926257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.926266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.926274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.926282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.926291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.881 [2024-11-20 18:58:15.926301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.881 [2024-11-20 18:58:15.926311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926522] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926610] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 
18:58:15.926791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926874] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.882 [2024-11-20 18:58:15.926932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.882 [2024-11-20 18:58:15.926940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.883 [2024-11-20 18:58:15.926948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.883 [2024-11-20 18:58:15.926955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.883 [2024-11-20 18:58:15.926963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 
nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.883 [2024-11-20 18:58:15.926971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.883 [2024-11-20 18:58:15.926979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.883 [2024-11-20 18:58:15.926985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.883 [2024-11-20 18:58:15.926994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.883 [2024-11-20 18:58:15.927001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.883 [2024-11-20 18:58:15.927010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.883 [2024-11-20 18:58:15.927017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.883 [2024-11-20 18:58:15.927025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.883 [2024-11-20 18:58:15.927032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.883 [2024-11-20 18:58:15.927040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.883 [2024-11-20 18:58:15.927047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:53.883 [2024-11-20 18:58:15.927056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.883 [2024-11-20 18:58:15.927062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.883 [2024-11-20 18:58:15.927071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.883 [2024-11-20 18:58:15.927077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.883 [2024-11-20 18:58:15.927086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.883 [2024-11-20 18:58:15.927094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.883 [2024-11-20 18:58:15.927103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.883 [2024-11-20 18:58:15.927109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.883 [2024-11-20 18:58:15.927117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.883 [2024-11-20 18:58:15.927126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.883 [2024-11-20 18:58:15.927134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.883 [2024-11-20 18:58:15.927142] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.883 [2024-11-20 18:58:15.927150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.883 [2024-11-20 18:58:15.927157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.883 [2024-11-20 18:58:15.927164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1047300 is same with the state(6) to be set
00:20:53.883 [2024-11-20 18:58:15.928150 - 18:58:15.929186] [... identical nvme_io_qpair_print_command (nvme_qpair.c:243) READ / spdk_nvme_print_completion (nvme_qpair.c:474) "ABORTED - SQ DELETION (00/08)" pairs repeated for sqid:1, cid:0-63, nsid:1, lba:16384-24448 in steps of 128, len:128 ...]
00:20:53.884 [2024-11-20 18:58:15.929194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104aba0 is same with the state(6) to be set
00:20:53.885 [2024-11-20 18:58:15.930177 onward] [... the same READ / "ABORTED - SQ DELETION (00/08)" pattern repeats from cid:4 lba:16896, sqid:1, len:128; log truncated mid-entry at nvme_qpair.c: ...]
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.886 [2024-11-20 18:58:15.930952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.886 [2024-11-20 18:58:15.930960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.886 [2024-11-20 18:58:15.930968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.886 [2024-11-20 18:58:15.930975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.886 [2024-11-20 18:58:15.930984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.886 [2024-11-20 18:58:15.930990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.886 [2024-11-20 18:58:15.930999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.886 [2024-11-20 18:58:15.931006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.886 [2024-11-20 18:58:15.931014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.886 [2024-11-20 18:58:15.931022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.886 [2024-11-20 18:58:15.931030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 
nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.886 [2024-11-20 18:58:15.931037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.886 [2024-11-20 18:58:15.931045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.886 [2024-11-20 18:58:15.931052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.886 [2024-11-20 18:58:15.931060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.886 [2024-11-20 18:58:15.931067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.886 [2024-11-20 18:58:15.931076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.886 [2024-11-20 18:58:15.931083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.886 [2024-11-20 18:58:15.931093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.886 [2024-11-20 18:58:15.931100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.886 [2024-11-20 18:58:15.931109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.886 [2024-11-20 18:58:15.931115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:53.886 [2024-11-20 18:58:15.931124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.886 [2024-11-20 18:58:15.931131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.886 [2024-11-20 18:58:15.931139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.886 [2024-11-20 18:58:15.931149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.886 [2024-11-20 18:58:15.931158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.886 [2024-11-20 18:58:15.931165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.886 [2024-11-20 18:58:15.931174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.886 [2024-11-20 18:58:15.931180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.886 [2024-11-20 18:58:15.931190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.886 [2024-11-20 18:58:15.931197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.886 [2024-11-20 18:58:15.931213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.886 [2024-11-20 18:58:15.931220] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.886 [2024-11-20 18:58:15.931229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f93710 is same with the state(6) to be set 00:20:53.886 [2024-11-20 18:58:15.932220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.886 [2024-11-20 18:58:15.932235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.886 [2024-11-20 18:58:15.932247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.886 [2024-11-20 18:58:15.932256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.886 [2024-11-20 18:58:15.932265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.886 [2024-11-20 18:58:15.932272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.886 [2024-11-20 18:58:15.932281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.886 [2024-11-20 18:58:15.932289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.886 [2024-11-20 18:58:15.932300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.886 [2024-11-20 18:58:15.932307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.886 [2024-11-20 18:58:15.932317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.886 [2024-11-20 18:58:15.932323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.886 [2024-11-20 18:58:15.932333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.886 [2024-11-20 18:58:15.932340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.886 [2024-11-20 18:58:15.932349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.886 [2024-11-20 18:58:15.932356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.886 [2024-11-20 18:58:15.932364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.886 [2024-11-20 18:58:15.932372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.886 [2024-11-20 18:58:15.932380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:53.887 [2024-11-20 18:58:15.932404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932493] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932766] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932851] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.932986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.932993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.887 [2024-11-20 18:58:15.933002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.887 [2024-11-20 18:58:15.933009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.888 [2024-11-20 18:58:15.933018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.888 [2024-11-20 18:58:15.933026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.888 [2024-11-20 
18:58:15.933034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.888 [2024-11-20 18:58:15.933041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.888 [2024-11-20 18:58:15.933049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.888 [2024-11-20 18:58:15.933056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.888 [2024-11-20 18:58:15.933066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.888 [2024-11-20 18:58:15.933073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.888 [2024-11-20 18:58:15.933082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.888 [2024-11-20 18:58:15.933088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.888 [2024-11-20 18:58:15.933099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.888 [2024-11-20 18:58:15.933106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.888 [2024-11-20 18:58:15.933115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.888 [2024-11-20 18:58:15.933121] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.888 [2024-11-20 18:58:15.933130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.888 [2024-11-20 18:58:15.933138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.888 [2024-11-20 18:58:15.933146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.888 [2024-11-20 18:58:15.933153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.888 [2024-11-20 18:58:15.933162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.888 [2024-11-20 18:58:15.933169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.888 [2024-11-20 18:58:15.933177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.888 [2024-11-20 18:58:15.933184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.888 [2024-11-20 18:58:15.933193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.888 [2024-11-20 18:58:15.933199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.888 [2024-11-20 18:58:15.933216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 
nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.888 [2024-11-20 18:58:15.933224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.888 [2024-11-20 18:58:15.933233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.888 [2024-11-20 18:58:15.933240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.888 [2024-11-20 18:58:15.933249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.888 [2024-11-20 18:58:15.933256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.888 [2024-11-20 18:58:15.933264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecfec0 is same with the state(6) to be set 00:20:53.888 [2024-11-20 18:58:15.934222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:53.888 [2024-11-20 18:58:15.934239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:20:53.888 [2024-11-20 18:58:15.934251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:20:53.888 [2024-11-20 18:58:15.934261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:20:53.888 [2024-11-20 18:58:15.934325] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 
00:20:53.888 [2024-11-20 18:58:15.934350] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:20:53.888 [2024-11-20 18:58:15.934417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:20:53.888 [2024-11-20 18:58:15.934430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:20:53.888 [2024-11-20 18:58:15.934726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:53.888 [2024-11-20 18:58:15.934741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc441b0 with addr=10.0.0.2, port=4420
00:20:53.888 [2024-11-20 18:58:15.934750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc441b0 is same with the state(6) to be set
00:20:53.888 [2024-11-20 18:58:15.934882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:53.888 [2024-11-20 18:58:15.934892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc432c0 with addr=10.0.0.2, port=4420
00:20:53.888 [2024-11-20 18:58:15.934900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc432c0 is same with the state(6) to be set
00:20:53.888 [2024-11-20 18:58:15.935062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:53.888 [2024-11-20 18:58:15.935072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc37fe0 with addr=10.0.0.2, port=4420
00:20:53.888 [2024-11-20 18:58:15.935080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc37fe0 is same with the state(6) to be set
00:20:53.888 [2024-11-20 18:58:15.935208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:53.888 [2024-11-20 18:58:15.935218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1066e80 with addr=10.0.0.2, port=4420
00:20:53.888 [2024-11-20 18:58:15.935226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1066e80 is same with the state(6) to be set
00:20:53.888 [2024-11-20 18:58:15.936389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.888 [2024-11-20 18:58:15.936404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.888 [2024-11-20 18:58:15.936417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.888 [2024-11-20 18:58:15.936424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.888 [2024-11-20 18:58:15.936434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.888 [2024-11-20 18:58:15.936441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.888 [2024-11-20 18:58:15.936452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.888 [2024-11-20 18:58:15.936459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.888 [2024-11-20 18:58:15.936468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.888 [2024-11-20 18:58:15.936476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.888 [2024-11-20 18:58:15.936485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.888 [2024-11-20 18:58:15.936493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.888 [2024-11-20 18:58:15.936509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.888 [2024-11-20 18:58:15.936516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.888 [2024-11-20 18:58:15.936526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.888 [2024-11-20 18:58:15.936533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.888 [2024-11-20 18:58:15.936543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.888 [2024-11-20 18:58:15.936551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.888 [2024-11-20 18:58:15.936559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.888 [2024-11-20 18:58:15.936567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.888 [2024-11-20 18:58:15.936575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.888 [2024-11-20 18:58:15.936583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.888 [2024-11-20 18:58:15.936591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.888 [2024-11-20 18:58:15.936599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.888 [2024-11-20 18:58:15.936607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.888 [2024-11-20 18:58:15.936615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.888 [2024-11-20 18:58:15.936623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.888 [2024-11-20 18:58:15.936630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.888 [2024-11-20 18:58:15.936639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.888 [2024-11-20 18:58:15.936646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.888 [2024-11-20 18:58:15.936655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.889 [2024-11-20 18:58:15.936661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.889 [2024-11-20 18:58:15.936671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.889 [2024-11-20 18:58:15.936677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.889 [2024-11-20 18:58:15.936687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.889 [2024-11-20 18:58:15.936694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.889 [2024-11-20 18:58:15.936703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.889 [2024-11-20 18:58:15.936712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.889 [2024-11-20 18:58:15.936721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.889 [2024-11-20 18:58:15.936728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.889 [2024-11-20 18:58:15.936737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.889 [2024-11-20 18:58:15.936744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.889 [2024-11-20 18:58:15.936752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.889 [2024-11-20 18:58:15.936760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.889 [2024-11-20 18:58:15.936767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.889 [2024-11-20 18:58:15.936775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.889 [2024-11-20 18:58:15.936783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.889 [2024-11-20 18:58:15.936791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.889 [2024-11-20 18:58:15.936799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.889 [2024-11-20 18:58:15.936807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.889 [2024-11-20 18:58:15.936815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.889 [2024-11-20 18:58:15.936822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.889 [2024-11-20 18:58:15.936831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.889 [2024-11-20 18:58:15.936837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.889 [2024-11-20 18:58:15.936846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.889 [2024-11-20 18:58:15.936853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.889 [2024-11-20 18:58:15.936863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.889 [2024-11-20 18:58:15.936870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.889 [2024-11-20 18:58:15.936879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.889 [2024-11-20 18:58:15.936886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.889 [2024-11-20 18:58:15.936896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.889 [2024-11-20 18:58:15.936903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.889 [2024-11-20 18:58:15.936914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.889 [2024-11-20 18:58:15.936920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.889 [2024-11-20 18:58:15.936929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.889 [2024-11-20 18:58:15.936936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.889 [2024-11-20 18:58:15.936944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.889 [2024-11-20 18:58:15.936951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.889 [2024-11-20 18:58:15.936960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.889 [2024-11-20 18:58:15.936967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.889 [2024-11-20 18:58:15.936975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.889 [2024-11-20 18:58:15.936983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.889 [2024-11-20 18:58:15.936991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.889 [2024-11-20 18:58:15.936998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.889 [2024-11-20 18:58:15.937007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.889 [2024-11-20 18:58:15.937014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.889 [2024-11-20 18:58:15.937023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.889 [2024-11-20 18:58:15.937031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.889 [2024-11-20 18:58:15.937039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.889 [2024-11-20 18:58:15.937046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.889 [2024-11-20 18:58:15.937055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.889 [2024-11-20 18:58:15.937062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.889 [2024-11-20 18:58:15.937070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.889 [2024-11-20 18:58:15.937078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.889 [2024-11-20 18:58:15.937087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.889 [2024-11-20 18:58:15.937093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.889 [2024-11-20 18:58:15.937102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.889 [2024-11-20 18:58:15.937112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.889 [2024-11-20 18:58:15.937121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.889 [2024-11-20 18:58:15.937127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.889 [2024-11-20 18:58:15.937136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.889 [2024-11-20 18:58:15.937143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.889 [2024-11-20 18:58:15.937152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.889 [2024-11-20 18:58:15.937159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.889 [2024-11-20 18:58:15.937168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.889 [2024-11-20 18:58:15.937175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.889 [2024-11-20 18:58:15.937183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.889 [2024-11-20 18:58:15.937190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.889 [2024-11-20 18:58:15.937199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.889 [2024-11-20 18:58:15.937214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.889 [2024-11-20 18:58:15.937224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.890 [2024-11-20 18:58:15.937230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.890 [2024-11-20 18:58:15.937240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.890 [2024-11-20 18:58:15.937247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.890 [2024-11-20 18:58:15.937255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.890 [2024-11-20 18:58:15.937263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.890 [2024-11-20 18:58:15.937272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.890 [2024-11-20 18:58:15.937279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.890 [2024-11-20 18:58:15.937288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.890 [2024-11-20 18:58:15.937295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.890 [2024-11-20 18:58:15.937304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.890 [2024-11-20 18:58:15.937311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.890 [2024-11-20 18:58:15.937320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.890 [2024-11-20 18:58:15.937329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.890 [2024-11-20 18:58:15.937338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.890 [2024-11-20 18:58:15.937345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.890 [2024-11-20 18:58:15.937354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.890 [2024-11-20 18:58:15.937362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.890 [2024-11-20 18:58:15.937370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.890 [2024-11-20 18:58:15.937377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.890 [2024-11-20 18:58:15.937385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.890 [2024-11-20 18:58:15.937392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.890 [2024-11-20 18:58:15.937400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.890 [2024-11-20 18:58:15.937407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.890 [2024-11-20 18:58:15.937416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.890 [2024-11-20 18:58:15.937423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.890 [2024-11-20 18:58:15.937432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.890 [2024-11-20 18:58:15.937439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:53.890 [2024-11-20 18:58:15.937446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecec80 is same with the state(6) to be set
00:20:53.890 [2024-11-20 18:58:15.938619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:20:53.890 [2024-11-20 18:58:15.938639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:20:53.890 [2024-11-20 18:58:15.938649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:20:53.890 task offset: 16384 on job bdev=Nvme5n1 fails
00:20:53.890
00:20:53.890 Latency(us)
00:20:53.890 [2024-11-20T17:58:16.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:53.890 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:53.890 Job: Nvme1n1 ended in about 0.65 seconds with error
00:20:53.890 Verification LBA range: start 0x0 length 0x400
00:20:53.890 Nvme1n1 : 0.65 197.94 12.37 98.97 0.00 212381.66 15915.89 206719.27
00:20:53.890 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:53.890 Job: Nvme2n1 ended in about 0.64 seconds with error
00:20:53.890 Verification LBA range: start 0x0 length 0x400
00:20:53.890 Nvme2n1 : 0.64 201.46 12.59 100.73 0.00 203402.48 20097.71 219701.64
00:20:53.890 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:53.890 Job: Nvme3n1 ended in about 0.65 seconds with error
00:20:53.890 Verification LBA range: start 0x0 length 0x400
00:20:53.890 Nvme3n1 : 0.65 203.25 12.70 98.54 0.00 198761.64 21970.16 205720.62
00:20:53.890 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:53.890 Job: Nvme4n1 ended in about 0.65 seconds with error
00:20:53.890 Verification LBA range: start 0x0 length 0x400
00:20:53.890 Nvme4n1 : 0.65 196.45 12.28 98.22 0.00 198501.59 22968.81 204721.98
00:20:53.890 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:53.890 Job: Nvme5n1 ended in about 0.63 seconds with error
00:20:53.890 Verification LBA range: start 0x0 length 0x400
00:20:53.890 Nvme5n1 : 0.63 202.04 12.63 101.02 0.00 187267.17 26713.72 205720.62
00:20:53.890 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:53.890 Job: Nvme6n1 ended in about 0.64 seconds with error
00:20:53.890 Verification LBA range: start 0x0 length 0x400
00:20:53.890 Nvme6n1 : 0.64 199.77 12.49 99.88 0.00 184487.50 4213.03 209715.20
00:20:53.890 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:53.890 Job: Nvme7n1 ended in about 0.65 seconds with error
00:20:53.890 Verification LBA range: start 0x0 length 0x400
00:20:53.890 Nvme7n1 : 0.65 195.84 12.24 97.92 0.00 183699.42 16727.28 204721.98
00:20:53.890 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:53.890 Job: Nvme8n1 ended in about 0.66 seconds with error
00:20:53.890 Verification LBA range: start 0x0 length 0x400
00:20:53.890 Nvme8n1 : 0.66 201.34 12.58 97.62 0.00 175548.91 16352.79 196732.83
00:20:53.890 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:53.890 Job: Nvme9n1 ended in about 0.66 seconds with error
00:20:53.890 Verification LBA range: start 0x0 length 0x400
00:20:53.890 Nvme9n1 : 0.66 96.70 6.04 96.70 0.00 264275.38 18849.40 249660.95
00:20:53.890 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:53.890 Job: Nvme10n1 ended in about 0.66 seconds with error
00:20:53.890 Verification LBA range: start 0x0 length 0x400
00:20:53.890 Nvme10n1 : 0.66 97.32 6.08 97.32 0.00 254445.96 19348.72 242670.45
00:20:53.890 [2024-11-20T17:58:16.215Z] ===================================================================================================================
00:20:53.890 [2024-11-20T17:58:16.215Z] Total : 1792.11 112.01 986.93 0.00 202417.36 4213.03 249660.95
00:20:53.890 [2024-11-20 18:58:15.970812] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:53.890 [2024-11-20 18:58:15.970857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:20:53.890 [2024-11-20 18:58:15.971165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:53.890 [2024-11-20 18:58:15.971185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1066ca0 with addr=10.0.0.2, port=4420
00:20:53.890 [2024-11-20 18:58:15.971196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1066ca0 is same with the state(6) to be set
00:20:53.890 [2024-11-20 18:58:15.971368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:53.890 [2024-11-20 18:58:15.971381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10a7510 with addr=10.0.0.2, port=4420
00:20:53.890 [2024-11-20 18:58:15.971389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a7510 is same with the state(6) to be set
00:20:53.890 [2024-11-20 18:58:15.971403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc441b0 (9): Bad file descriptor
00:20:53.890 [2024-11-20 18:58:15.971415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc432c0 (9): Bad file descriptor
00:20:53.890 [2024-11-20 18:58:15.971425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc37fe0 (9): Bad file descriptor
00:20:53.890 [2024-11-20 18:58:15.971434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1066e80 (9): Bad file descriptor
00:20:53.890 [2024-11-20 18:58:15.971725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:53.890 [2024-11-20 18:58:15.971748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc381e0 with addr=10.0.0.2, port=4420
00:20:53.890 [2024-11-20 18:58:15.971756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc381e0 is same with the state(6) to be set
00:20:53.890 [2024-11-20 18:58:15.971979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:53.890 [2024-11-20 18:58:15.971992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x106f110 with addr=10.0.0.2, port=4420
00:20:53.890 [2024-11-20 18:58:15.972000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106f110 is same with the state(6) to be set
00:20:53.890 [2024-11-20 18:58:15.972216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:53.890 [2024-11-20 18:58:15.972229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb58610 with addr=10.0.0.2, port=4420
00:20:53.890 [2024-11-20 18:58:15.972237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb58610 is same with the state(6) to be set
00:20:53.890 [2024-11-20 18:58:15.972377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:53.890 [2024-11-20 18:58:15.972387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1097140 with addr=10.0.0.2, port=4420
00:20:53.890 [2024-11-20 18:58:15.972394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1097140 is same with the state(6) to be set
00:20:53.890 [2024-11-20 18:58:15.972405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1066ca0 (9): Bad file descriptor
00:20:53.890 [2024-11-20 18:58:15.972415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10a7510 (9): Bad file descriptor
00:20:53.891 [2024-11-20 18:58:15.972424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:20:53.891 [2024-11-20 18:58:15.972430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:20:53.891 [2024-11-20 18:58:15.972438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:20:53.891 [2024-11-20 18:58:15.972447] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:20:53.891 [2024-11-20 18:58:15.972455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:20:53.891 [2024-11-20 18:58:15.972462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:20:53.891 [2024-11-20 18:58:15.972468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:20:53.891 [2024-11-20 18:58:15.972475] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:20:53.891 [2024-11-20 18:58:15.972482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:20:53.891 [2024-11-20 18:58:15.972488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:20:53.891 [2024-11-20 18:58:15.972494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:20:53.891 [2024-11-20 18:58:15.972500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:20:53.891 [2024-11-20 18:58:15.972506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:20:53.891 [2024-11-20 18:58:15.972513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:20:53.891 [2024-11-20 18:58:15.972520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:20:53.891 [2024-11-20 18:58:15.972526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:20:53.891 [2024-11-20 18:58:15.972575] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:20:53.891 [2024-11-20 18:58:15.972587] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
00:20:53.891 [2024-11-20 18:58:15.972926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc381e0 (9): Bad file descriptor 00:20:53.891 [2024-11-20 18:58:15.972939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x106f110 (9): Bad file descriptor 00:20:53.891 [2024-11-20 18:58:15.972948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb58610 (9): Bad file descriptor 00:20:53.891 [2024-11-20 18:58:15.972957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1097140 (9): Bad file descriptor 00:20:53.891 [2024-11-20 18:58:15.972966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:20:53.891 [2024-11-20 18:58:15.972972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:20:53.891 [2024-11-20 18:58:15.972978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:20:53.891 [2024-11-20 18:58:15.972985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:20:53.891 [2024-11-20 18:58:15.972992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:20:53.891 [2024-11-20 18:58:15.972999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:20:53.891 [2024-11-20 18:58:15.973006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:20:53.891 [2024-11-20 18:58:15.973012] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:20:53.891 [2024-11-20 18:58:15.973047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:20:53.891 [2024-11-20 18:58:15.973058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:20:53.891 [2024-11-20 18:58:15.973066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:20:53.891 [2024-11-20 18:58:15.973075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:53.891 [2024-11-20 18:58:15.973102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:20:53.891 [2024-11-20 18:58:15.973110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:20:53.891 [2024-11-20 18:58:15.973117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:20:53.891 [2024-11-20 18:58:15.973124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:20:53.891 [2024-11-20 18:58:15.973131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:20:53.891 [2024-11-20 18:58:15.973138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:20:53.891 [2024-11-20 18:58:15.973144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:20:53.891 [2024-11-20 18:58:15.973150] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:20:53.891 [2024-11-20 18:58:15.973158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:20:53.891 [2024-11-20 18:58:15.973164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:20:53.891 [2024-11-20 18:58:15.973173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:20:53.891 [2024-11-20 18:58:15.973180] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:20:53.891 [2024-11-20 18:58:15.973188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:20:53.891 [2024-11-20 18:58:15.973194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:20:53.891 [2024-11-20 18:58:15.973207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:20:53.891 [2024-11-20 18:58:15.973214] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:20:53.891 [2024-11-20 18:58:15.973334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:53.891 [2024-11-20 18:58:15.973348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1066e80 with addr=10.0.0.2, port=4420 00:20:53.891 [2024-11-20 18:58:15.973356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1066e80 is same with the state(6) to be set 00:20:53.891 [2024-11-20 18:58:15.973475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:53.891 [2024-11-20 18:58:15.973486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc37fe0 with addr=10.0.0.2, port=4420 00:20:53.891 [2024-11-20 18:58:15.973494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc37fe0 is same with the state(6) to be set 00:20:53.891 [2024-11-20 18:58:15.973659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:53.891 [2024-11-20 18:58:15.973669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc432c0 with addr=10.0.0.2, port=4420 00:20:53.891 [2024-11-20 18:58:15.973677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc432c0 is same with the state(6) to be set 00:20:53.891 [2024-11-20 18:58:15.973755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:53.891 [2024-11-20 18:58:15.973765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc441b0 with addr=10.0.0.2, port=4420 00:20:53.891 [2024-11-20 18:58:15.973773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc441b0 is same with the state(6) to be set 00:20:53.891 [2024-11-20 18:58:15.973801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1066e80 (9): Bad file descriptor 00:20:53.891 [2024-11-20 18:58:15.973812] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc37fe0 (9): Bad file descriptor 00:20:53.891 [2024-11-20 18:58:15.973822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc432c0 (9): Bad file descriptor 00:20:53.891 [2024-11-20 18:58:15.973830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc441b0 (9): Bad file descriptor 00:20:53.891 [2024-11-20 18:58:15.973855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:20:53.891 [2024-11-20 18:58:15.973863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:20:53.891 [2024-11-20 18:58:15.973870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:20:53.891 [2024-11-20 18:58:15.973878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:20:53.891 [2024-11-20 18:58:15.973886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:20:53.891 [2024-11-20 18:58:15.973892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:20:53.891 [2024-11-20 18:58:15.973900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:20:53.891 [2024-11-20 18:58:15.973906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:20:53.891 [2024-11-20 18:58:15.973915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:20:53.891 [2024-11-20 18:58:15.973922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:20:53.891 [2024-11-20 18:58:15.973929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:20:53.891 [2024-11-20 18:58:15.973935] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:20:53.891 [2024-11-20 18:58:15.973941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:53.891 [2024-11-20 18:58:15.973948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:53.891 [2024-11-20 18:58:15.973955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:53.891 [2024-11-20 18:58:15.973961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:20:54.151 18:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:20:55.093 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3702926 00:20:55.093 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:20:55.093 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3702926 00:20:55.093 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:20:55.093 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:55.093 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:20:55.093 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:55.093 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3702926 00:20:55.093 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:20:55.093 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:55.093 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:20:55.093 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:20:55.093 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:20:55.093 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:20:55.093 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:20:55.093 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:55.093 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:55.093 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:55.093 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:55.093 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:55.093 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:20:55.093 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:55.093 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:20:55.093 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:55.093 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:55.093 rmmod nvme_tcp 00:20:55.093 rmmod nvme_fabrics 00:20:55.093 rmmod nvme_keyring 00:20:55.093 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:55.093 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:20:55.093 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:20:55.093 18:58:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3702820 ']' 00:20:55.093 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3702820 00:20:55.093 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3702820 ']' 00:20:55.093 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3702820 00:20:55.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3702820) - No such process 00:20:55.093 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3702820 is not found' 00:20:55.093 Process with pid 3702820 is not found 00:20:55.093 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:55.093 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:55.093 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:55.094 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:20:55.094 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:20:55.094 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:55.094 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:20:55.094 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:55.094 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:20:55.094 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.094 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:55.094 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:57.638 00:20:57.638 real 0m7.132s 00:20:57.638 user 0m16.319s 00:20:57.638 sys 0m1.262s 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:57.638 ************************************ 00:20:57.638 END TEST nvmf_shutdown_tc3 00:20:57.638 ************************************ 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:57.638 ************************************ 00:20:57.638 START TEST nvmf_shutdown_tc4 00:20:57.638 ************************************ 00:20:57.638 18:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:57.638 18:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:57.638 18:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:57.638 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:57.639 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:57.639 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:57.639 18:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:20:57.639 Found net devices under 0000:86:00.0: cvl_0_0 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:57.639 Found net devices under 0000:86:00.1: cvl_0_1 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:57.639 18:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:57.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:57.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.416 ms 00:20:57.639 00:20:57.639 --- 10.0.0.2 ping statistics --- 00:20:57.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:57.639 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:57.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:57.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:20:57.639 00:20:57.639 --- 10.0.0.1 ping statistics --- 00:20:57.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:57.639 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:57.639 18:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3704172 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3704172 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3704172 ']' 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:57.639 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:57.640 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:57.640 [2024-11-20 18:58:19.897023] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:20:57.640 [2024-11-20 18:58:19.897069] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:57.898 [2024-11-20 18:58:19.976591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:57.898 [2024-11-20 18:58:20.023321] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:57.898 [2024-11-20 18:58:20.023357] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:57.898 [2024-11-20 18:58:20.023365] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:57.898 [2024-11-20 18:58:20.023371] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:57.898 [2024-11-20 18:58:20.023376] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:57.898 [2024-11-20 18:58:20.024849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:57.898 [2024-11-20 18:58:20.024955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:57.898 [2024-11-20 18:58:20.025060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.898 [2024-11-20 18:58:20.025062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:58.465 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:58.465 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:20:58.465 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:58.465 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:58.465 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:58.465 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:58.465 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:58.465 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.465 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:58.465 [2024-11-20 18:58:20.764956] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:58.465 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.465 18:58:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:58.465 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:58.465 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:58.465 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:58.465 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:58.465 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:58.465 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:58.465 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:58.465 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:58.465 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:58.465 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:58.724 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:58.724 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:58.724 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:58.724 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:20:58.724 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:58.724 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:58.725 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:58.725 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:58.725 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:58.725 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:58.725 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:58.725 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:58.725 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:58.725 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:58.725 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:58.725 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.725 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:58.725 Malloc1 00:20:58.725 [2024-11-20 18:58:20.868836] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:58.725 Malloc2 00:20:58.725 Malloc3 00:20:58.725 Malloc4 00:20:58.725 Malloc5 00:20:58.983 Malloc6 00:20:58.983 Malloc7 00:20:58.983 Malloc8 00:20:58.983 Malloc9 
00:20:58.983 Malloc10 00:20:58.983 18:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.983 18:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:58.983 18:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:58.983 18:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:58.983 18:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3704461 00:20:58.983 18:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:20:58.983 18:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:20:59.241 [2024-11-20 18:58:21.380924] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:21:04.520 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:04.520 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3704172 00:21:04.520 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3704172 ']' 00:21:04.520 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3704172 00:21:04.520 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:21:04.520 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:04.520 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3704172 00:21:04.520 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:04.520 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:04.520 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3704172' 00:21:04.520 killing process with pid 3704172 00:21:04.520 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3704172 00:21:04.520 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3704172 00:21:04.520 [2024-11-20 18:58:26.371892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf37810 is same with the state(6) to be set 00:21:04.520 [2024-11-20 
18:58:26.371944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf37810 is same with the state(6) to be set 00:21:04.520 [2024-11-20 18:58:26.371951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf37810 is same with the state(6) to be set 00:21:04.520 [2024-11-20 18:58:26.371958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf37810 is same with the state(6) to be set 00:21:04.520 [2024-11-20 18:58:26.371964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf37810 is same with the state(6) to be set 00:21:04.520 [2024-11-20 18:58:26.371970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf37810 is same with the state(6) to be set 00:21:04.520 [2024-11-20 18:58:26.372483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf37b90 is same with the state(6) to be set 00:21:04.520 [2024-11-20 18:58:26.372512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf37b90 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.372520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf37b90 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.372526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf37b90 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.372533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf37b90 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.372546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf37b90 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.372552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf37b90 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.372558] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf37b90 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.372564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf37b90 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.372571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf37b90 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.373346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf37f10 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.373372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf37f10 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.373379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf37f10 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.373388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf37f10 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.373394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf37f10 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.373400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf37f10 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.373407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf37f10 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.374382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea0fe0 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.374407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea0fe0 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.374415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xea0fe0 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.374422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea0fe0 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.374429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea0fe0 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.374435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea0fe0 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.375960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc2df0 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.375983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc2df0 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.375990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc2df0 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.375996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc2df0 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.376003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc2df0 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.376010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc2df0 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.376016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc2df0 is same with the state(6) to be set 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 starting I/O failed: -6 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with 
error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 starting I/O failed: -6 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 starting I/O failed: -6 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 starting I/O failed: -6 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 starting I/O failed: -6 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 [2024-11-20 18:58:26.376652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc2450 is same with the state(6) to be set 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 [2024-11-20 18:58:26.376673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc2450 is same with the state(6) to be set 00:21:04.521 starting I/O failed: -6 00:21:04.521 [2024-11-20 18:58:26.376681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc2450 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.376688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc2450 is same with the state(6) to be set 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 
00:21:04.521 starting I/O failed: -6 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 starting I/O failed: -6 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 [2024-11-20 18:58:26.376869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 starting I/O failed: -6 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 starting I/O failed: -6 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 starting I/O failed: -6 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 starting I/O failed: -6 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 starting I/O failed: -6 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 starting I/O failed: -6 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 starting I/O failed: -6 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 starting I/O failed: -6 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 starting I/O failed: -6 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error 
(sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 starting I/O failed: -6 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 starting I/O failed: -6 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 starting I/O failed: -6 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 starting I/O failed: -6 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 starting I/O failed: -6 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 starting I/O failed: -6 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 starting I/O failed: -6 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 starting I/O failed: -6 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 starting I/O failed: -6 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 starting I/O failed: -6 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 starting I/O failed: -6 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 starting I/O failed: -6 00:21:04.521 Write completed with error (sct=0, sc=8) 00:21:04.521 [2024-11-20 18:58:26.377863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:04.521 NVMe io qpair process completion error 00:21:04.521 [2024-11-20 18:58:26.379221] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc0330 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.379243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc0330 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.379251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc0330 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.379258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc0330 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.379265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc0330 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.379272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc0330 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.379278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc0330 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.379284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc0330 is same with the state(6) to be set 00:21:04.521 [2024-11-20 18:58:26.380142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc0d10 is same with the state(6) to be set 00:21:04.522 [2024-11-20 18:58:26.380166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc0d10 is same with the state(6) to be set 00:21:04.522 [2024-11-20 18:58:26.380174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc0d10 is same with the state(6) to be set 00:21:04.522 [2024-11-20 18:58:26.380181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc0d10 is same with the state(6) to be set 00:21:04.522 [2024-11-20 18:58:26.380188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xcc0d10 is same with the state(6) to be set 00:21:04.522 [2024-11-20 18:58:26.380194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc0d10 is same with the state(6) to be set 00:21:04.522 [2024-11-20 18:58:26.381456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3ca0 is same with the state(6) to be set 00:21:04.522 [2024-11-20 18:58:26.381475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3ca0 is same with the state(6) to be set 00:21:04.522 [2024-11-20 18:58:26.381482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3ca0 is same with the state(6) to be set 00:21:04.522 [2024-11-20 18:58:26.381489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3ca0 is same with the state(6) to be set 00:21:04.522 [2024-11-20 18:58:26.381498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3ca0 is same with the state(6) to be set 00:21:04.522 [2024-11-20 18:58:26.381682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4190 is same with the state(6) to be set 00:21:04.522 [2024-11-20 18:58:26.381703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4190 is same with the state(6) to be set 00:21:04.522 [2024-11-20 18:58:26.381710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4190 is same with the state(6) to be set 00:21:04.522 [2024-11-20 18:58:26.381716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4190 is same with the state(6) to be set 00:21:04.522 [2024-11-20 18:58:26.381723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4190 is same with the state(6) to be set 00:21:04.522 [2024-11-20 18:58:26.381729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4190 
is same with the state(6) to be set 00:21:04.522 [2024-11-20 18:58:26.382306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4660 is same with the state(6) to be set 00:21:04.522 [2024-11-20 18:58:26.382327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4660 is same with the state(6) to be set 00:21:04.522 [2024-11-20 18:58:26.382334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4660 is same with the state(6) to be set 00:21:04.522 [2024-11-20 18:58:26.382340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4660 is same with the state(6) to be set 00:21:04.522 [2024-11-20 18:58:26.382347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4660 is same with the state(6) to be set 00:21:04.522 [2024-11-20 18:58:26.382354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4660 is same with the state(6) to be set 00:21:04.522 [2024-11-20 18:58:26.382834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc37d0 is same with the state(6) to be set 00:21:04.522 [2024-11-20 18:58:26.382859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc37d0 is same with the state(6) to be set 00:21:04.522 [2024-11-20 18:58:26.382867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc37d0 is same with the state(6) to be set 00:21:04.522 [2024-11-20 18:58:26.382873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc37d0 is same with the state(6) to be set 00:21:04.522 [2024-11-20 18:58:26.382879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc37d0 is same with the state(6) to be set 00:21:04.522 [2024-11-20 18:58:26.382887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc37d0 is same with the state(6) to be set 
00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 starting I/O failed: -6 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 starting I/O failed: -6 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 starting I/O failed: -6 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 starting I/O failed: -6 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 starting I/O failed: -6 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 starting I/O failed: -6 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 starting I/O failed: -6 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 starting I/O failed: -6 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed 
with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 starting I/O failed: -6 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 starting I/O failed: -6 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 [2024-11-20 18:58:26.383533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:04.522 starting I/O failed: -6 00:21:04.522 starting I/O failed: -6 00:21:04.522 starting I/O failed: -6 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 starting I/O failed: -6 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 starting I/O failed: -6 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 starting I/O failed: -6 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 starting I/O failed: -6 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 starting I/O failed: -6 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 starting I/O failed: -6 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 starting I/O failed: -6 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 starting I/O failed: -6 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, 
sc=8) 00:21:04.522 starting I/O failed: -6 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 starting I/O failed: -6 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 starting I/O failed: -6 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 starting I/O failed: -6 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 starting I/O failed: -6 00:21:04.522 [2024-11-20 18:58:26.384269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc10b0 is same with the state(6) to be set 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 [2024-11-20 18:58:26.384290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc10b0 is same with the state(6) to be set 00:21:04.522 starting I/O failed: -6 00:21:04.522 [2024-11-20 18:58:26.384298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc10b0 is same with the state(6) to be set 00:21:04.522 [2024-11-20 18:58:26.384305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc10b0 is same with the state(6) to be set 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 [2024-11-20 18:58:26.384312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc10b0 is same with the state(6) to be set 00:21:04.522 [2024-11-20 18:58:26.384319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc10b0 is same with the state(6) to be set 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 starting I/O failed: -6 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 starting I/O failed: -6 00:21:04.522 
Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 starting I/O failed: -6 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 starting I/O failed: -6 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 [2024-11-20 18:58:26.384465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 starting I/O failed: -6 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 starting I/O failed: -6 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 starting I/O failed: -6 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 starting I/O failed: -6 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 starting I/O failed: -6 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 starting I/O failed: -6 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 starting I/O failed: -6 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 starting I/O failed: -6 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.522 starting I/O failed: -6 00:21:04.522 Write completed with error (sct=0, sc=8) 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 
starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 
Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 [2024-11-20 18:58:26.385444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 
00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, 
sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error 
(sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 [2024-11-20 18:58:26.387019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:04.523 NVMe io qpair process completion error 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 starting I/O failed: -6 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.523 [2024-11-20 18:58:26.387632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5000 is same with 
the state(6) to be set 00:21:04.523 Write completed with error (sct=0, sc=8) 00:21:04.524 [2024-11-20 18:58:26.387653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5000 is same with the state(6) to be set 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 [2024-11-20 18:58:26.387661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5000 is same with the state(6) to be set 00:21:04.524 starting I/O failed: -6 00:21:04.524 [2024-11-20 18:58:26.387668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5000 is same with the state(6) to be set 00:21:04.524 [2024-11-20 18:58:26.387675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5000 is same with the state(6) to be set 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 [2024-11-20 18:58:26.387681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5000 is same with the state(6) to be set 00:21:04.524 [2024-11-20 18:58:26.387688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5000 is same with the state(6) to be set 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 Write completed with error 
(sct=0, sc=8) 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 [2024-11-20 18:58:26.387945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:04.524 [2024-11-20 18:58:26.387981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc54f0 is same with the state(6) to be set 00:21:04.524 [2024-11-20 18:58:26.387999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc54f0 is same with the state(6) to be set 00:21:04.524 [2024-11-20 18:58:26.388007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc54f0 is same with the state(6) to be set 00:21:04.524 [2024-11-20 18:58:26.388013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc54f0 is same with the state(6) to be set 00:21:04.524 [2024-11-20 18:58:26.388020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc54f0 is same with the state(6) to be set 00:21:04.524 [2024-11-20 18:58:26.388026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc54f0 is same with the state(6) to be set 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 Write 
completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 [2024-11-20 18:58:26.388425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc59c0 is same with the state(6) to be set 00:21:04.524 starting I/O failed: -6 00:21:04.524 [2024-11-20 18:58:26.388444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc59c0 is same with the state(6) to be set 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 [2024-11-20 18:58:26.388451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc59c0 is same with the state(6) to be set 00:21:04.524 [2024-11-20 18:58:26.388458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc59c0 is same with the state(6) to be set 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 [2024-11-20 18:58:26.388465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc59c0 is same with the state(6) to be set 00:21:04.524 [2024-11-20 18:58:26.388471]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc59c0 is same with the state(6) to be set 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 [2024-11-20 18:58:26.388746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b30 is same with the state(6) to be set 00:21:04.524 [2024-11-20 18:58:26.388765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b30 is same with the state(6) to be set 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 [2024-11-20 18:58:26.388773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b30 is same with the state(6) to be set 00:21:04.524 [2024-11-20 18:58:26.388780] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b30 is same with the state(6) to be set 00:21:04.524 [2024-11-20 18:58:26.388787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b30 is same with the state(6) to be set 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 [2024-11-20 18:58:26.388843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 00:21:04.524 Write completed with error (sct=0, sc=8) 00:21:04.524 starting I/O failed: -6 
00:21:04.524 Write completed with error (sct=0, sc=8)
00:21:04.524 starting I/O failed: -6
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated ...]
00:21:04.525 [2024-11-20 18:58:26.389826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write failures ...]
00:21:04.525 [2024-11-20 18:58:26.391507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:04.525 NVMe io qpair process completion error
[... repeated write failures ...]
00:21:04.526 [2024-11-20 18:58:26.392490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write failures ...]
00:21:04.526 [2024-11-20 18:58:26.393425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write failures ...]
00:21:04.526 [2024-11-20 18:58:26.394420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write failures ...]
00:21:04.527 [2024-11-20 18:58:26.396220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:04.527 NVMe io qpair process completion error
[... repeated write failures ...]
00:21:04.527 [2024-11-20 18:58:26.397330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write failures ...]
00:21:04.527 [2024-11-20 18:58:26.398216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write failures ...]
00:21:04.528 [2024-11-20 18:58:26.399196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write failures ...]
00:21:04.528 [2024-11-20 18:58:26.400954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:04.528 NVMe io qpair process completion error
[... repeated write failures, entry truncated: "00:21:04.528 Write completed with" ...]
error (sct=0, sc=8) 00:21:04.528 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 [2024-11-20 18:58:26.401959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 
00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 
00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 [2024-11-20 18:58:26.402828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 
Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, 
sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 [2024-11-20 18:58:26.403862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 
00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.529 Write completed with error (sct=0, sc=8) 00:21:04.529 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: 
-6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O 
failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 [2024-11-20 18:58:26.406806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:04.530 NVMe io qpair process completion error 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with 
error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 [2024-11-20 18:58:26.408125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 
00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 starting I/O failed: -6 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.530 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write 
completed with error (sct=0, sc=8) 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 [2024-11-20 18:58:26.409023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write 
completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 
00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 [2024-11-20 18:58:26.410018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 
Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 00:21:04.531 Write completed with error (sct=0, sc=8) 00:21:04.531 starting I/O failed: -6 
00:21:04.531 Write completed with error (sct=0, sc=8)
00:21:04.531 starting I/O failed: -6
[the two entries above repeat for each remaining outstanding write]
00:21:04.532 [2024-11-20 18:58:26.412505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:04.532 NVMe io qpair process completion error
[interleaved "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries repeat]
00:21:04.532 [2024-11-20 18:58:26.413472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
[interleaved completion-error and I/O-failure entries repeat]
00:21:04.532 [2024-11-20 18:58:26.414375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
[interleaved completion-error and I/O-failure entries repeat]
00:21:04.532 [2024-11-20 18:58:26.415396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
[interleaved completion-error and I/O-failure entries repeat]
00:21:04.533 [2024-11-20 18:58:26.417377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:04.533 NVMe io qpair process completion error
[interleaved completion-error and I/O-failure entries repeat]
00:21:04.534 [2024-11-20 18:58:26.421455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
[interleaved completion-error and I/O-failure entries repeat]
00:21:04.534 [2024-11-20 18:58:26.422581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
[interleaved completion-error and I/O-failure entries repeat]
00:21:04.535 [2024-11-20 18:58:26.425384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:04.535 NVMe io qpair process completion error
00:21:04.535 Write completed with error (sct=0, sc=8)
00:21:04.535 starting I/O failed: -6
[interleaved completion-error and I/O-failure entries continue]
00:21:04.535 Write completed with error
(sct=0, sc=8) 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 starting I/O failed: -6 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 starting I/O failed: -6 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 starting I/O failed: -6 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 starting I/O failed: -6 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 starting I/O failed: -6 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 starting I/O failed: -6 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 [2024-11-20 18:58:26.426533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 starting I/O failed: -6 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 starting I/O failed: -6 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 Write completed 
with error (sct=0, sc=8) 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 starting I/O failed: -6 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 starting I/O failed: -6 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 starting I/O failed: -6 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 starting I/O failed: -6 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 starting I/O failed: -6 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 starting I/O failed: -6 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 starting I/O failed: -6 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 starting I/O failed: -6 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.535 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, 
sc=8) 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 [2024-11-20 18:58:26.427417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed 
with error (sct=0, sc=8) 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 
Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 [2024-11-20 18:58:26.428403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 
00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: 
-6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.536 Write completed with error (sct=0, sc=8) 00:21:04.536 starting I/O failed: -6 00:21:04.537 Write completed with error (sct=0, sc=8) 00:21:04.537 starting I/O failed: -6 00:21:04.537 Write completed with error (sct=0, sc=8) 00:21:04.537 starting I/O failed: -6 00:21:04.537 Write completed with error (sct=0, sc=8) 00:21:04.537 starting I/O 
failed: -6 00:21:04.537 Write completed with error (sct=0, sc=8) 00:21:04.537 starting I/O failed: -6 00:21:04.537 Write completed with error (sct=0, sc=8) 00:21:04.537 starting I/O failed: -6 00:21:04.537 Write completed with error (sct=0, sc=8) 00:21:04.537 starting I/O failed: -6 00:21:04.537 Write completed with error (sct=0, sc=8) 00:21:04.537 starting I/O failed: -6 00:21:04.537 Write completed with error (sct=0, sc=8) 00:21:04.537 starting I/O failed: -6 00:21:04.537 Write completed with error (sct=0, sc=8) 00:21:04.537 starting I/O failed: -6 00:21:04.537 Write completed with error (sct=0, sc=8) 00:21:04.537 starting I/O failed: -6 00:21:04.537 Write completed with error (sct=0, sc=8) 00:21:04.537 starting I/O failed: -6 00:21:04.537 [2024-11-20 18:58:26.432962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:04.537 NVMe io qpair process completion error 00:21:04.537 Initializing NVMe Controllers 00:21:04.537 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:04.537 Controller IO queue size 128, less than required. 00:21:04.537 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:04.537 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:21:04.537 Controller IO queue size 128, less than required. 00:21:04.537 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:04.537 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:21:04.537 Controller IO queue size 128, less than required. 00:21:04.537 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:21:04.537 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:21:04.537 Controller IO queue size 128, less than required.
00:21:04.537 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:04.537 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:21:04.537 Controller IO queue size 128, less than required.
00:21:04.537 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:04.537 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:21:04.537 Controller IO queue size 128, less than required.
00:21:04.537 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:04.537 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:21:04.537 Controller IO queue size 128, less than required.
00:21:04.537 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:04.537 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:21:04.537 Controller IO queue size 128, less than required.
00:21:04.537 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:04.537 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:21:04.537 Controller IO queue size 128, less than required.
00:21:04.537 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:04.537 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:21:04.537 Controller IO queue size 128, less than required.
00:21:04.537 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:04.537 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:04.537 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:21:04.537 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:21:04.537 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:21:04.537 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:21:04.537 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:21:04.537 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:21:04.537 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:21:04.537 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:21:04.537 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:21:04.537 Initialization complete. Launching workers.
00:21:04.537 ========================================================
00:21:04.537 Latency(us)
00:21:04.537 Device Information : IOPS MiB/s Average min max
00:21:04.537 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2233.33 95.96 57317.96 904.98 100969.64
00:21:04.537 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2182.53 93.78 58684.83 926.31 113169.36
00:21:04.537 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2158.44 92.75 59361.76 688.06 111884.85
00:21:04.537 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2210.32 94.97 57844.54 643.50 110853.03
00:21:04.537 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2197.73 94.43 58298.94 1139.04 111081.03
00:21:04.537 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2226.82 95.68 57565.31 924.16 114248.48
00:21:04.537 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2221.61 95.46 57118.40 692.55 107448.66
00:21:04.537 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2212.06 95.05 57373.93 723.81 107190.57
00:21:04.537 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2185.14 93.89 58092.87 870.95 105633.58
00:21:04.537 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2190.78 94.14 57957.33 732.40 105117.13
00:21:04.537 ========================================================
00:21:04.537 Total : 22018.74 946.12 57955.59 643.50 114248.48
00:21:04.537
00:21:04.537 [2024-11-20 18:58:26.435972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cf720 is same with the state(6) to be set
00:21:04.537 [2024-11-20 18:58:26.436018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cd560 is same with the state(6) to be set
00:21:04.537 [2024-11-20 18:58:26.436048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cdbc0 is same with the state(6) to be set
00:21:04.537 [2024-11-20 18:58:26.436076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cf900 is same with the state(6) to be set
00:21:04.537 [2024-11-20 18:58:26.436107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cd890 is same with the state(6) to be set
00:21:04.537 [2024-11-20 18:58:26.436135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ce740 is same with the state(6) to be set
00:21:04.537 [2024-11-20 18:58:26.436164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cfae0 is same with the state(6) to be set
00:21:04.537 [2024-11-20 18:58:26.436192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cdef0 is same with the state(6) to be set
00:21:04.537 [2024-11-20 18:58:26.436226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cea70 is same with the state(6) to be set
00:21:04.537 [2024-11-20 18:58:26.436256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ce410 is same with the state(6) to be set
00:21:04.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:21:04.537 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:21:05.474 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3704461
00:21:05.474 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:21:05.474 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3704461
00:21:05.474 18:58:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:21:05.474 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:05.474 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:21:05.474 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:05.474 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3704461 00:21:05.474 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:21:05.474 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:05.474 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:05.474 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:05.474 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:21:05.474 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:05.474 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:05.474 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:05.474 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:05.474 18:58:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:05.474 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:21:05.474 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:05.474 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:21:05.474 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:05.474 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:05.474 rmmod nvme_tcp 00:21:05.474 rmmod nvme_fabrics 00:21:05.732 rmmod nvme_keyring 00:21:05.732 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:05.732 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:21:05.732 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:21:05.732 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3704172 ']' 00:21:05.732 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3704172 00:21:05.732 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3704172 ']' 00:21:05.732 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3704172 00:21:05.732 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3704172) - No such process 00:21:05.732 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3704172 is not 
found' 00:21:05.732 Process with pid 3704172 is not found 00:21:05.732 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:05.732 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:05.732 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:05.732 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:21:05.732 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:21:05.732 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:05.732 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:21:05.732 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:05.732 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:05.732 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.732 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:05.732 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.640 18:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:07.640 00:21:07.640 real 0m10.381s 00:21:07.640 user 0m27.508s 00:21:07.640 sys 0m5.178s 00:21:07.640 18:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:21:07.640 18:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:07.640 ************************************ 00:21:07.640 END TEST nvmf_shutdown_tc4 00:21:07.640 ************************************ 00:21:07.640 18:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:21:07.640 00:21:07.640 real 0m41.808s 00:21:07.640 user 1m44.204s 00:21:07.640 sys 0m14.007s 00:21:07.640 18:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:07.640 18:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:07.640 ************************************ 00:21:07.640 END TEST nvmf_shutdown 00:21:07.640 ************************************ 00:21:07.900 18:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:07.900 18:58:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:07.900 18:58:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:07.900 18:58:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:07.900 ************************************ 00:21:07.900 START TEST nvmf_nsid 00:21:07.900 ************************************ 00:21:07.900 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:07.900 * Looking for test storage... 
00:21:07.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:07.900 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:07.900 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:21:07.900 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:07.900 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:07.900 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:07.900 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:07.900 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:07.900 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:21:07.900 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:21:07.900 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:21:07.900 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:21:07.900 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:21:07.900 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:21:07.900 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:21:07.900 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:07.900 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:21:07.900 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:21:07.900 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:07.900 
18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:07.900 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:21:07.900 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:21:07.900 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:07.900 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:21:07.900 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:07.900 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:21:07.900 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:21:07.900 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:07.900 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:21:07.900 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:07.900 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:07.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.901 --rc genhtml_branch_coverage=1 00:21:07.901 --rc genhtml_function_coverage=1 00:21:07.901 --rc genhtml_legend=1 00:21:07.901 --rc geninfo_all_blocks=1 00:21:07.901 --rc 
geninfo_unexecuted_blocks=1 00:21:07.901 00:21:07.901 ' 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:07.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.901 --rc genhtml_branch_coverage=1 00:21:07.901 --rc genhtml_function_coverage=1 00:21:07.901 --rc genhtml_legend=1 00:21:07.901 --rc geninfo_all_blocks=1 00:21:07.901 --rc geninfo_unexecuted_blocks=1 00:21:07.901 00:21:07.901 ' 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:07.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.901 --rc genhtml_branch_coverage=1 00:21:07.901 --rc genhtml_function_coverage=1 00:21:07.901 --rc genhtml_legend=1 00:21:07.901 --rc geninfo_all_blocks=1 00:21:07.901 --rc geninfo_unexecuted_blocks=1 00:21:07.901 00:21:07.901 ' 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:07.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.901 --rc genhtml_branch_coverage=1 00:21:07.901 --rc genhtml_function_coverage=1 00:21:07.901 --rc genhtml_legend=1 00:21:07.901 --rc geninfo_all_blocks=1 00:21:07.901 --rc geninfo_unexecuted_blocks=1 00:21:07.901 00:21:07.901 ' 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:07.901 18:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:07.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:07.901 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:07.902 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:07.902 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:07.902 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.902 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:21:07.902 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.902 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:08.161 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:08.161 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:21:08.161 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:14.731 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:14.731 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:21:14.731 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:14.731 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:14.731 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:14.731 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:14.731 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:14.731 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:21:14.731 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:14.731 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:21:14.731 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:21:14.731 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:21:14.731 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:21:14.731 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:21:14.731 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:21:14.731 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:14.731 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:14.731 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:14.731 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:14.731 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:14.731 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:14.731 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:14.731 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:14.731 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:14.731 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:14.731 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:14.731 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:14.731 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:14.731 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:14.731 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:21:14.731 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:14.731 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:14.731 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:14.731 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:14.732 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:14.732 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:14.732 Found net devices under 0000:86:00.0: cvl_0_0 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:14.732 Found net devices under 0000:86:00.1: cvl_0_1 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:14.732 18:58:35 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:14.732 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:14.732 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:14.732 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:14.732 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:14.732 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:14.732 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:14.732 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:14.732 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:21:14.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:21:14.732 00:21:14.732 --- 10.0.0.2 ping statistics --- 00:21:14.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.732 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:21:14.732 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:14.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:14.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:21:14.732 00:21:14.732 --- 10.0.0.1 ping statistics --- 00:21:14.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.732 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:21:14.732 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:14.732 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:21:14.732 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:14.732 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:14.732 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:14.732 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:14.732 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:14.732 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:14.732 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:14.732 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:21:14.732 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:14.732 18:58:36 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:14.732 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:14.732 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3708940 00:21:14.732 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:21:14.732 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3708940 00:21:14.732 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3708940 ']' 00:21:14.732 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.732 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:14.732 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.732 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:14.732 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:14.732 [2024-11-20 18:58:36.208109] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:21:14.732 [2024-11-20 18:58:36.208154] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:14.732 [2024-11-20 18:58:36.286371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.732 [2024-11-20 18:58:36.324761] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:14.732 [2024-11-20 18:58:36.324799] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:14.732 [2024-11-20 18:58:36.324806] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:14.732 [2024-11-20 18:58:36.324811] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:14.732 [2024-11-20 18:58:36.324817] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:14.732 [2024-11-20 18:58:36.325371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.732 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:14.732 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:14.732 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:14.732 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3708959 00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.733 
18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=3f48a7d6-c70d-4545-95a9-5b728d6ce5cd 00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=69550309-2423-490f-9ae4-681b121bd927 00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=dad164ce-f079-4ff0-9848-e6b44a76c00f 00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:14.733 null0 00:21:14.733 null1 00:21:14.733 [2024-11-20 18:58:36.514481] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:21:14.733 [2024-11-20 18:58:36.514522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3708959 ] 00:21:14.733 null2 00:21:14.733 [2024-11-20 18:58:36.521263] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:14.733 [2024-11-20 18:58:36.545466] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3708959 /var/tmp/tgt2.sock 00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3708959 ']' 00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:21:14.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:14.733 [2024-11-20 18:58:36.590020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.733 [2024-11-20 18:58:36.631251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:14.733 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:21:14.992 [2024-11-20 18:58:37.170854] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:14.992 [2024-11-20 18:58:37.186959] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:21:14.992 nvme0n1 nvme0n2 00:21:14.992 nvme1n1 00:21:14.992 18:58:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:21:14.992 18:58:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:21:14.992 18:58:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:16.368 18:58:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:21:16.368 18:58:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:21:16.368 18:58:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:21:16.368 18:58:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:21:16.368 18:58:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:21:16.368 18:58:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:21:16.368 18:58:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:21:16.368 18:58:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:16.368 18:58:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:16.368 18:58:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:16.368 18:58:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:21:16.368 18:58:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:21:16.368 18:58:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:21:17.305 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:17.305 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:17.305 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:17.305 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:17.305 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:17.305 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 3f48a7d6-c70d-4545-95a9-5b728d6ce5cd 00:21:17.305 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:17.305 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:21:17.305 18:58:39 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:21:17.305 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:21:17.305 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:17.305 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=3f48a7d6c70d454595a95b728d6ce5cd 00:21:17.305 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 3F48A7D6C70D454595A95B728D6CE5CD 00:21:17.305 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 3F48A7D6C70D454595A95B728D6CE5CD == \3\F\4\8\A\7\D\6\C\7\0\D\4\5\4\5\9\5\A\9\5\B\7\2\8\D\6\C\E\5\C\D ]] 00:21:17.305 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:21:17.305 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:17.305 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:17.305 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:21:17.305 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:17.305 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:21:17.305 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:17.305 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 69550309-2423-490f-9ae4-681b121bd927 00:21:17.306 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:17.306 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:21:17.306 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:21:17.306 
18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:21:17.306 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:17.306 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=695503092423490f9ae4681b121bd927 00:21:17.306 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 695503092423490F9AE4681B121BD927 00:21:17.306 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 695503092423490F9AE4681B121BD927 == \6\9\5\5\0\3\0\9\2\4\2\3\4\9\0\F\9\A\E\4\6\8\1\B\1\2\1\B\D\9\2\7 ]] 00:21:17.306 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:21:17.306 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:17.306 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:17.306 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:21:17.306 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:17.306 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:21:17.306 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:17.306 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid dad164ce-f079-4ff0-9848-e6b44a76c00f 00:21:17.306 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:17.306 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:21:17.306 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:21:17.306 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:21:17.306 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:17.306 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=dad164cef0794ff09848e6b44a76c00f 00:21:17.306 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo DAD164CEF0794FF09848E6B44A76C00F 00:21:17.306 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ DAD164CEF0794FF09848E6B44A76C00F == \D\A\D\1\6\4\C\E\F\0\7\9\4\F\F\0\9\8\4\8\E\6\B\4\4\A\7\6\C\0\0\F ]] 00:21:17.306 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:21:17.566 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:21:17.566 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:21:17.566 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3708959 00:21:17.566 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3708959 ']' 00:21:17.566 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3708959 00:21:17.566 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:17.566 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:17.566 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3708959 00:21:17.566 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:17.566 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:17.566 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3708959' 00:21:17.566 killing process with pid 3708959 00:21:17.566 18:58:39 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3708959 00:21:17.566 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3708959 00:21:17.825 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:21:17.825 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:17.825 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:21:17.825 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:17.825 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:21:17.825 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:17.825 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:17.825 rmmod nvme_tcp 00:21:17.825 rmmod nvme_fabrics 00:21:17.825 rmmod nvme_keyring 00:21:17.825 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:17.825 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:21:17.825 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:21:17.825 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3708940 ']' 00:21:17.825 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3708940 00:21:17.825 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3708940 ']' 00:21:17.825 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3708940 00:21:17.825 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:17.825 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:17.825 18:58:40 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3708940 00:21:18.084 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:18.084 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:18.084 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3708940' 00:21:18.084 killing process with pid 3708940 00:21:18.084 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3708940 00:21:18.084 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3708940 00:21:18.084 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:18.084 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:18.084 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:18.084 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:21:18.084 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:21:18.084 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:18.084 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:21:18.084 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:18.084 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:18.084 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.084 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:18.084 18:58:40 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.622 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:20.622 00:21:20.622 real 0m12.405s 00:21:20.622 user 0m9.720s 00:21:20.622 sys 0m5.467s 00:21:20.622 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:20.622 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:20.622 ************************************ 00:21:20.622 END TEST nvmf_nsid 00:21:20.622 ************************************ 00:21:20.622 18:58:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:20.622 00:21:20.622 real 11m58.355s 00:21:20.622 user 25m26.489s 00:21:20.622 sys 3m47.966s 00:21:20.622 18:58:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:20.622 18:58:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:20.622 ************************************ 00:21:20.622 END TEST nvmf_target_extra 00:21:20.622 ************************************ 00:21:20.622 18:58:42 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:20.622 18:58:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:20.622 18:58:42 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:20.622 18:58:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:20.622 ************************************ 00:21:20.622 START TEST nvmf_host 00:21:20.622 ************************************ 00:21:20.622 18:58:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:20.622 * Looking for test storage... 
00:21:20.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:20.622 18:58:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:20.622 18:58:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:21:20.622 18:58:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:20.622 18:58:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:20.622 18:58:42 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:20.622 18:58:42 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:20.622 18:58:42 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:20.622 18:58:42 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:20.622 18:58:42 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:20.622 18:58:42 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:20.622 18:58:42 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:20.622 18:58:42 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:20.622 18:58:42 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:20.622 18:58:42 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:20.622 18:58:42 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:20.622 18:58:42 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:20.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.623 --rc genhtml_branch_coverage=1 00:21:20.623 --rc genhtml_function_coverage=1 00:21:20.623 --rc genhtml_legend=1 00:21:20.623 --rc geninfo_all_blocks=1 00:21:20.623 --rc geninfo_unexecuted_blocks=1 00:21:20.623 00:21:20.623 ' 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:20.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.623 --rc genhtml_branch_coverage=1 00:21:20.623 --rc genhtml_function_coverage=1 00:21:20.623 --rc genhtml_legend=1 00:21:20.623 --rc 
geninfo_all_blocks=1 00:21:20.623 --rc geninfo_unexecuted_blocks=1 00:21:20.623 00:21:20.623 ' 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:20.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.623 --rc genhtml_branch_coverage=1 00:21:20.623 --rc genhtml_function_coverage=1 00:21:20.623 --rc genhtml_legend=1 00:21:20.623 --rc geninfo_all_blocks=1 00:21:20.623 --rc geninfo_unexecuted_blocks=1 00:21:20.623 00:21:20.623 ' 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:20.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.623 --rc genhtml_branch_coverage=1 00:21:20.623 --rc genhtml_function_coverage=1 00:21:20.623 --rc genhtml_legend=1 00:21:20.623 --rc geninfo_all_blocks=1 00:21:20.623 --rc geninfo_unexecuted_blocks=1 00:21:20.623 00:21:20.623 ' 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:20.623 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.623 ************************************ 00:21:20.623 START TEST nvmf_multicontroller 00:21:20.623 ************************************ 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:20.623 * Looking for test storage... 
00:21:20.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:21:20.623 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:21:20.624 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:20.624 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:21:20.624 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:21:20.624 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:20.624 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:21:20.624 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:21:20.624 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:21:20.883 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:21:20.883 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:20.883 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:21:20.883 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:20.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.884 --rc genhtml_branch_coverage=1 00:21:20.884 --rc genhtml_function_coverage=1 
00:21:20.884 --rc genhtml_legend=1 00:21:20.884 --rc geninfo_all_blocks=1 00:21:20.884 --rc geninfo_unexecuted_blocks=1 00:21:20.884 00:21:20.884 ' 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:20.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.884 --rc genhtml_branch_coverage=1 00:21:20.884 --rc genhtml_function_coverage=1 00:21:20.884 --rc genhtml_legend=1 00:21:20.884 --rc geninfo_all_blocks=1 00:21:20.884 --rc geninfo_unexecuted_blocks=1 00:21:20.884 00:21:20.884 ' 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:20.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.884 --rc genhtml_branch_coverage=1 00:21:20.884 --rc genhtml_function_coverage=1 00:21:20.884 --rc genhtml_legend=1 00:21:20.884 --rc geninfo_all_blocks=1 00:21:20.884 --rc geninfo_unexecuted_blocks=1 00:21:20.884 00:21:20.884 ' 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:20.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.884 --rc genhtml_branch_coverage=1 00:21:20.884 --rc genhtml_function_coverage=1 00:21:20.884 --rc genhtml_legend=1 00:21:20.884 --rc geninfo_all_blocks=1 00:21:20.884 --rc geninfo_unexecuted_blocks=1 00:21:20.884 00:21:20.884 ' 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:20.884 18:58:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:20.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:21:20.884 18:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:27.456 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:27.456 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:27.456 18:58:48 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.456 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:27.457 Found net devices under 0000:86:00.0: cvl_0_0 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:27.457 Found net devices under 0000:86:00.1: cvl_0_1 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:27.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:27.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.456 ms 00:21:27.457 00:21:27.457 --- 10.0.0.2 ping statistics --- 00:21:27.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.457 rtt min/avg/max/mdev = 0.456/0.456/0.456/0.000 ms 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:27.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:27.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:21:27.457 00:21:27.457 --- 10.0.0.1 ping statistics --- 00:21:27.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.457 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3713274 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3713274 00:21:27.457 18:58:48 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3713274 ']' 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:27.457 18:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.457 [2024-11-20 18:58:49.028136] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:21:27.457 [2024-11-20 18:58:49.028178] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:27.457 [2024-11-20 18:58:49.107511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:27.457 [2024-11-20 18:58:49.149110] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:27.457 [2024-11-20 18:58:49.149147] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:27.457 [2024-11-20 18:58:49.149154] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:27.457 [2024-11-20 18:58:49.149160] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:27.457 [2024-11-20 18:58:49.149165] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:27.457 [2024-11-20 18:58:49.150586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:27.457 [2024-11-20 18:58:49.150696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.457 [2024-11-20 18:58:49.150698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:27.457 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:27.457 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:27.457 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:27.457 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:27.457 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.457 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:27.457 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:27.457 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.457 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.457 [2024-11-20 18:58:49.299135] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:27.457 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.457 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:27.457 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.457 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.457 Malloc0 00:21:27.457 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.457 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:27.457 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.457 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.457 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.457 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:27.457 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.457 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.458 [2024-11-20 
18:58:49.366504] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.458 [2024-11-20 18:58:49.374449] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.458 Malloc1 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3713301 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3713301 /var/tmp/bdevperf.sock 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3713301 ']' 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:27.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.458 NVMe0n1 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.458 1 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:27.458 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:27.718 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:27.718 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:27.718 18:58:49 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.718 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.718 request: 00:21:27.718 { 00:21:27.718 "name": "NVMe0", 00:21:27.718 "trtype": "tcp", 00:21:27.718 "traddr": "10.0.0.2", 00:21:27.718 "adrfam": "ipv4", 00:21:27.718 "trsvcid": "4420", 00:21:27.718 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:27.718 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:27.718 "hostaddr": "10.0.0.1", 00:21:27.718 "prchk_reftag": false, 00:21:27.718 "prchk_guard": false, 00:21:27.718 "hdgst": false, 00:21:27.718 "ddgst": false, 00:21:27.718 "allow_unrecognized_csi": false, 00:21:27.718 "method": "bdev_nvme_attach_controller", 00:21:27.718 "req_id": 1 00:21:27.718 } 00:21:27.718 Got JSON-RPC error response 00:21:27.718 response: 00:21:27.718 { 00:21:27.718 "code": -114, 00:21:27.718 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:27.718 } 00:21:27.718 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:27.718 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:27.718 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:27.718 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:27.718 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:27.718 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:27.718 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:27.718 18:58:49 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:27.718 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:27.718 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:27.718 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:27.718 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:27.718 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:27.718 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.718 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.718 request: 00:21:27.718 { 00:21:27.718 "name": "NVMe0", 00:21:27.718 "trtype": "tcp", 00:21:27.718 "traddr": "10.0.0.2", 00:21:27.718 "adrfam": "ipv4", 00:21:27.718 "trsvcid": "4420", 00:21:27.718 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:27.718 "hostaddr": "10.0.0.1", 00:21:27.718 "prchk_reftag": false, 00:21:27.718 "prchk_guard": false, 00:21:27.718 "hdgst": false, 00:21:27.718 "ddgst": false, 00:21:27.718 "allow_unrecognized_csi": false, 00:21:27.718 "method": "bdev_nvme_attach_controller", 00:21:27.718 "req_id": 1 00:21:27.718 } 00:21:27.718 Got JSON-RPC error response 00:21:27.718 response: 00:21:27.718 { 00:21:27.718 "code": -114, 00:21:27.718 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:27.718 } 00:21:27.718 18:58:49 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:27.718 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:27.718 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:27.718 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:27.718 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:27.718 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:27.718 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.719 request: 00:21:27.719 { 00:21:27.719 "name": "NVMe0", 00:21:27.719 "trtype": "tcp", 00:21:27.719 "traddr": "10.0.0.2", 00:21:27.719 "adrfam": "ipv4", 00:21:27.719 "trsvcid": "4420", 00:21:27.719 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:27.719 "hostaddr": "10.0.0.1", 00:21:27.719 "prchk_reftag": false, 00:21:27.719 "prchk_guard": false, 00:21:27.719 "hdgst": false, 00:21:27.719 "ddgst": false, 00:21:27.719 "multipath": "disable", 00:21:27.719 "allow_unrecognized_csi": false, 00:21:27.719 "method": "bdev_nvme_attach_controller", 00:21:27.719 "req_id": 1 00:21:27.719 } 00:21:27.719 Got JSON-RPC error response 00:21:27.719 response: 00:21:27.719 { 00:21:27.719 "code": -114, 00:21:27.719 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:21:27.719 } 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.719 request: 00:21:27.719 { 00:21:27.719 "name": "NVMe0", 00:21:27.719 "trtype": "tcp", 00:21:27.719 "traddr": "10.0.0.2", 00:21:27.719 "adrfam": "ipv4", 00:21:27.719 "trsvcid": "4420", 00:21:27.719 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:27.719 "hostaddr": "10.0.0.1", 00:21:27.719 "prchk_reftag": false, 00:21:27.719 "prchk_guard": false, 00:21:27.719 "hdgst": false, 00:21:27.719 "ddgst": false, 00:21:27.719 "multipath": "failover", 00:21:27.719 "allow_unrecognized_csi": false, 00:21:27.719 "method": "bdev_nvme_attach_controller", 00:21:27.719 "req_id": 1 00:21:27.719 } 00:21:27.719 Got JSON-RPC error response 00:21:27.719 response: 00:21:27.719 { 00:21:27.719 "code": -114, 00:21:27.719 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:27.719 } 00:21:27.719 18:58:49 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.719 NVMe0n1 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.719 18:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.979 00:21:27.979 18:58:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.979 18:58:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:27.979 18:58:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:27.979 18:58:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.979 18:58:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.979 18:58:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.979 18:58:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:27.979 18:58:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:28.915 { 00:21:28.916 "results": [ 00:21:28.916 { 00:21:28.916 "job": "NVMe0n1", 00:21:28.916 "core_mask": "0x1", 00:21:28.916 "workload": "write", 00:21:28.916 "status": "finished", 00:21:28.916 "queue_depth": 128, 00:21:28.916 "io_size": 4096, 00:21:28.916 "runtime": 1.004982, 00:21:28.916 "iops": 24711.885387001956, 00:21:28.916 "mibps": 96.53080229297639, 00:21:28.916 "io_failed": 0, 00:21:28.916 "io_timeout": 0, 00:21:28.916 "avg_latency_us": 5172.790716020977, 00:21:28.916 "min_latency_us": 3183.177142857143, 00:21:28.916 "max_latency_us": 12358.217142857144 00:21:28.916 } 00:21:28.916 ], 00:21:28.916 "core_count": 1 00:21:28.916 } 00:21:28.916 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:28.916 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.916 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:29.175 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.175 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:21:29.175 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3713301 00:21:29.175 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3713301 ']' 00:21:29.175 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3713301 00:21:29.175 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:29.175 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:29.175 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3713301 00:21:29.175 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:29.175 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:29.175 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3713301' 00:21:29.175 killing process with pid 3713301 00:21:29.175 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3713301 00:21:29.175 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3713301 00:21:29.175 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:29.175 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.175 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:29.175 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.175 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:29.175 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.175 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:29.175 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.175 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:21:29.175 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:29.175 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:29.175 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:29.175 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:21:29.175 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:21:29.175 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:29.175 [2024-11-20 18:58:49.479260] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:21:29.175 [2024-11-20 18:58:49.479309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3713301 ] 00:21:29.175 [2024-11-20 18:58:49.553808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.175 [2024-11-20 18:58:49.596301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.175 [2024-11-20 18:58:50.088829] bdev.c:4906:bdev_name_add: *ERROR*: Bdev name 99aab552-0449-4277-822c-f529c973cf69 already exists 00:21:29.175 [2024-11-20 18:58:50.088858] bdev.c:8106:bdev_register: *ERROR*: Unable to add uuid:99aab552-0449-4277-822c-f529c973cf69 alias for bdev NVMe1n1 00:21:29.175 [2024-11-20 18:58:50.088866] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:29.175 Running I/O for 1 seconds... 00:21:29.175 24707.00 IOPS, 96.51 MiB/s 00:21:29.175 Latency(us) 00:21:29.175 [2024-11-20T17:58:51.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.175 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:29.175 NVMe0n1 : 1.00 24711.89 96.53 0.00 0.00 5172.79 3183.18 12358.22 00:21:29.175 [2024-11-20T17:58:51.500Z] =================================================================================================================== 00:21:29.175 [2024-11-20T17:58:51.500Z] Total : 24711.89 96.53 0.00 0.00 5172.79 3183.18 12358.22 00:21:29.175 Received shutdown signal, test time was about 1.000000 seconds 00:21:29.175 00:21:29.175 Latency(us) 00:21:29.175 [2024-11-20T17:58:51.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.175 [2024-11-20T17:58:51.500Z] =================================================================================================================== 00:21:29.175 [2024-11-20T17:58:51.500Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:21:29.175 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:29.175 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:29.435 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:29.435 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:21:29.435 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:29.435 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:21:29.435 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:29.435 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:21:29.435 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:29.435 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:29.435 rmmod nvme_tcp 00:21:29.435 rmmod nvme_fabrics 00:21:29.435 rmmod nvme_keyring 00:21:29.435 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:29.435 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:21:29.435 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:21:29.435 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3713274 ']' 00:21:29.435 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3713274 00:21:29.435 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3713274 ']' 00:21:29.435 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3713274 
00:21:29.435 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:29.435 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:29.435 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3713274 00:21:29.435 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:29.435 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:29.435 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3713274' 00:21:29.435 killing process with pid 3713274 00:21:29.435 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3713274 00:21:29.435 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3713274 00:21:29.695 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:29.695 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:29.695 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:29.695 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:21:29.695 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:21:29.695 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:29.695 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:21:29.695 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:29.695 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:21:29.695 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:29.695 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:29.695 18:58:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.601 18:58:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:31.601 00:21:31.601 real 0m11.128s 00:21:31.601 user 0m11.866s 00:21:31.601 sys 0m5.266s 00:21:31.601 18:58:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:31.601 18:58:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.601 ************************************ 00:21:31.601 END TEST nvmf_multicontroller 00:21:31.601 ************************************ 00:21:31.860 18:58:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:31.860 18:58:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:31.860 18:58:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:31.860 18:58:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.860 ************************************ 00:21:31.860 START TEST nvmf_aer 00:21:31.860 ************************************ 00:21:31.860 18:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:31.860 * Looking for test storage... 
00:21:31.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:31.860 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:31.860 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:21:31.860 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:31.860 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:31.860 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:31.860 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:31.860 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:31.860 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:21:31.860 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:21:31.860 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:21:31.860 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:21:31.860 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:21:31.860 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:21:31.860 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:21:31.860 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:31.860 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:21:31.860 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:21:31.860 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:31.860 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:31.860 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:21:31.860 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:21:31.860 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:31.860 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:21:31.860 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:21:31.860 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:21:31.860 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:21:31.860 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:31.860 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:21:31.860 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:21:31.860 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:31.860 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:31.860 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:21:31.860 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:31.860 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:31.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.860 --rc genhtml_branch_coverage=1 00:21:31.860 --rc genhtml_function_coverage=1 00:21:31.860 --rc genhtml_legend=1 00:21:31.860 --rc geninfo_all_blocks=1 00:21:31.860 --rc geninfo_unexecuted_blocks=1 00:21:31.860 00:21:31.860 ' 00:21:31.860 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:31.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.860 --rc 
genhtml_branch_coverage=1 00:21:31.860 --rc genhtml_function_coverage=1 00:21:31.860 --rc genhtml_legend=1 00:21:31.860 --rc geninfo_all_blocks=1 00:21:31.860 --rc geninfo_unexecuted_blocks=1 00:21:31.860 00:21:31.860 ' 00:21:31.860 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:31.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.860 --rc genhtml_branch_coverage=1 00:21:31.860 --rc genhtml_function_coverage=1 00:21:31.860 --rc genhtml_legend=1 00:21:31.860 --rc geninfo_all_blocks=1 00:21:31.860 --rc geninfo_unexecuted_blocks=1 00:21:31.860 00:21:31.860 ' 00:21:31.861 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:31.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.861 --rc genhtml_branch_coverage=1 00:21:31.861 --rc genhtml_function_coverage=1 00:21:31.861 --rc genhtml_legend=1 00:21:31.861 --rc geninfo_all_blocks=1 00:21:31.861 --rc geninfo_unexecuted_blocks=1 00:21:31.861 00:21:31.861 ' 00:21:31.861 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:31.861 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:31.861 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:31.861 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:31.861 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:31.861 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:31.861 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:31.861 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:31.861 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:31.861 18:58:54 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:31.861 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:31.861 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:31.861 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:31.861 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:31.861 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:31.861 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:31.861 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:31.861 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:31.861 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:31.861 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:21:32.120 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:32.120 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:32.120 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:32.120 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.120 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.121 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.121 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:21:32.121 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.121 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:21:32.121 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:32.121 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:32.121 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:32.121 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:32.121 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:32.121 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:32.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:32.121 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:32.121 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:32.121 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:32.121 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:32.121 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:32.121 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:32.121 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:32.121 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:32.121 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:32.121 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.121 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:32.121 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:32.121 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:32.121 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:32.121 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:21:32.121 18:58:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:38.769 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:38.769 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.769 18:58:59 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:38.769 Found net devices under 0000:86:00.0: cvl_0_0 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:38.769 Found net devices under 0000:86:00.1: cvl_0_1 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:38.769 18:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:38.769 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:38.769 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:38.769 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:38.769 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:38.769 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:38.769 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:38.769 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:38.769 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:38.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:38.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:21:38.770 00:21:38.770 --- 10.0.0.2 ping statistics --- 00:21:38.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.770 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:38.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:38.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:21:38.770 00:21:38.770 --- 10.0.0.1 ping statistics --- 00:21:38.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.770 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3717298 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3717298 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3717298 ']' 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:38.770 [2024-11-20 18:59:00.247567] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:21:38.770 [2024-11-20 18:59:00.247610] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:38.770 [2024-11-20 18:59:00.324160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:38.770 [2024-11-20 18:59:00.366806] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:38.770 [2024-11-20 18:59:00.366844] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:38.770 [2024-11-20 18:59:00.366851] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:38.770 [2024-11-20 18:59:00.366857] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:38.770 [2024-11-20 18:59:00.366863] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:38.770 [2024-11-20 18:59:00.368431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:38.770 [2024-11-20 18:59:00.368531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:38.770 [2024-11-20 18:59:00.368636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.770 [2024-11-20 18:59:00.368637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:38.770 [2024-11-20 18:59:00.505439] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:38.770 Malloc0 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:38.770 [2024-11-20 18:59:00.560693] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:38.770 [ 00:21:38.770 { 00:21:38.770 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:38.770 "subtype": "Discovery", 00:21:38.770 "listen_addresses": [], 00:21:38.770 "allow_any_host": true, 00:21:38.770 "hosts": [] 00:21:38.770 }, 00:21:38.770 { 00:21:38.770 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.770 "subtype": "NVMe", 00:21:38.770 "listen_addresses": [ 00:21:38.770 { 00:21:38.770 "trtype": "TCP", 00:21:38.770 "adrfam": "IPv4", 00:21:38.770 "traddr": "10.0.0.2", 00:21:38.770 "trsvcid": "4420" 00:21:38.770 } 00:21:38.770 ], 00:21:38.770 "allow_any_host": true, 00:21:38.770 "hosts": [], 00:21:38.770 "serial_number": "SPDK00000000000001", 00:21:38.770 "model_number": "SPDK bdev Controller", 00:21:38.770 "max_namespaces": 2, 00:21:38.770 "min_cntlid": 1, 00:21:38.770 "max_cntlid": 65519, 00:21:38.770 "namespaces": [ 00:21:38.770 { 00:21:38.770 "nsid": 1, 00:21:38.770 "bdev_name": "Malloc0", 00:21:38.770 "name": "Malloc0", 00:21:38.770 "nguid": "B5272DFE9FD4484B87BBB976C8374586", 00:21:38.770 "uuid": "b5272dfe-9fd4-484b-87bb-b976c8374586" 00:21:38.770 } 00:21:38.770 ] 00:21:38.770 } 00:21:38.770 ] 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3717328 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:38.770 Malloc1 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:38.770 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.771 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:38.771 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.771 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:38.771 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.771 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:38.771 Asynchronous Event Request test 00:21:38.771 Attaching to 10.0.0.2 00:21:38.771 Attached to 10.0.0.2 00:21:38.771 Registering asynchronous event callbacks... 00:21:38.771 Starting namespace attribute notice tests for all controllers... 00:21:38.771 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:38.771 aer_cb - Changed Namespace 00:21:38.771 Cleaning up... 
00:21:38.771 [ 00:21:38.771 { 00:21:38.771 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:38.771 "subtype": "Discovery", 00:21:38.771 "listen_addresses": [], 00:21:38.771 "allow_any_host": true, 00:21:38.771 "hosts": [] 00:21:38.771 }, 00:21:38.771 { 00:21:38.771 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.771 "subtype": "NVMe", 00:21:38.771 "listen_addresses": [ 00:21:38.771 { 00:21:38.771 "trtype": "TCP", 00:21:38.771 "adrfam": "IPv4", 00:21:38.771 "traddr": "10.0.0.2", 00:21:38.771 "trsvcid": "4420" 00:21:38.771 } 00:21:38.771 ], 00:21:38.771 "allow_any_host": true, 00:21:38.771 "hosts": [], 00:21:38.771 "serial_number": "SPDK00000000000001", 00:21:38.771 "model_number": "SPDK bdev Controller", 00:21:38.771 "max_namespaces": 2, 00:21:38.771 "min_cntlid": 1, 00:21:38.771 "max_cntlid": 65519, 00:21:38.771 "namespaces": [ 00:21:38.771 { 00:21:38.771 "nsid": 1, 00:21:38.771 "bdev_name": "Malloc0", 00:21:38.771 "name": "Malloc0", 00:21:38.771 "nguid": "B5272DFE9FD4484B87BBB976C8374586", 00:21:38.771 "uuid": "b5272dfe-9fd4-484b-87bb-b976c8374586" 00:21:38.771 }, 00:21:38.771 { 00:21:38.771 "nsid": 2, 00:21:38.771 "bdev_name": "Malloc1", 00:21:38.771 "name": "Malloc1", 00:21:38.771 "nguid": "ED1D55B3048B4FB68E5966605ECB8290", 00:21:38.771 "uuid": "ed1d55b3-048b-4fb6-8e59-66605ecb8290" 00:21:38.771 } 00:21:38.771 ] 00:21:38.771 } 00:21:38.771 ] 00:21:38.771 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.771 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3717328 00:21:38.771 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:38.771 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.771 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:38.771 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.771 18:59:00 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:38.771 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.771 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:38.771 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.771 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:38.771 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.771 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:38.771 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.771 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:38.771 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:38.771 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:38.771 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:21:38.771 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:38.771 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:21:38.771 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:38.771 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:38.771 rmmod nvme_tcp 00:21:38.771 rmmod nvme_fabrics 00:21:38.771 rmmod nvme_keyring 00:21:38.771 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:38.771 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:21:38.771 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:21:38.771 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
3717298 ']' 00:21:38.771 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3717298 00:21:38.771 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3717298 ']' 00:21:38.771 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3717298 00:21:38.771 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:21:38.771 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:38.771 18:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3717298 00:21:38.771 18:59:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:38.771 18:59:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:38.771 18:59:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3717298' 00:21:38.771 killing process with pid 3717298 00:21:38.771 18:59:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 3717298 00:21:38.771 18:59:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3717298 00:21:39.031 18:59:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:39.031 18:59:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:39.031 18:59:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:39.031 18:59:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:21:39.031 18:59:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:21:39.031 18:59:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:21:39.031 18:59:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:39.031 18:59:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:39.031 18:59:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:39.031 18:59:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.031 18:59:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:39.031 18:59:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.936 18:59:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:41.194 00:21:41.194 real 0m9.279s 00:21:41.194 user 0m4.958s 00:21:41.194 sys 0m4.958s 00:21:41.194 18:59:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:41.194 18:59:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:41.194 ************************************ 00:21:41.194 END TEST nvmf_aer 00:21:41.194 ************************************ 00:21:41.194 18:59:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:41.194 18:59:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:41.194 18:59:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:41.194 18:59:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.194 ************************************ 00:21:41.194 START TEST nvmf_async_init 00:21:41.194 ************************************ 00:21:41.194 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:41.194 * Looking for test storage... 
00:21:41.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:41.194 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:41.194 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:21:41.194 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:41.194 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:41.194 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:41.194 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:41.194 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:41.194 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:21:41.194 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:21:41.194 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:21:41.194 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:21:41.194 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:21:41.194 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:21:41.194 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:21:41.194 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:41.194 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:21:41.194 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:21:41.194 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:41.194 18:59:03 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:41.194 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:21:41.194 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:21:41.194 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:41.194 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:21:41.194 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:21:41.194 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:21:41.194 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:21:41.194 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:41.195 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:21:41.195 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:21:41.195 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:41.195 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:41.195 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:21:41.195 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:41.195 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:41.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.195 --rc genhtml_branch_coverage=1 00:21:41.195 --rc genhtml_function_coverage=1 00:21:41.195 --rc genhtml_legend=1 00:21:41.195 --rc geninfo_all_blocks=1 00:21:41.195 --rc geninfo_unexecuted_blocks=1 00:21:41.195 
00:21:41.195 ' 00:21:41.195 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:41.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.195 --rc genhtml_branch_coverage=1 00:21:41.195 --rc genhtml_function_coverage=1 00:21:41.195 --rc genhtml_legend=1 00:21:41.195 --rc geninfo_all_blocks=1 00:21:41.195 --rc geninfo_unexecuted_blocks=1 00:21:41.195 00:21:41.195 ' 00:21:41.195 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:41.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.195 --rc genhtml_branch_coverage=1 00:21:41.195 --rc genhtml_function_coverage=1 00:21:41.195 --rc genhtml_legend=1 00:21:41.195 --rc geninfo_all_blocks=1 00:21:41.195 --rc geninfo_unexecuted_blocks=1 00:21:41.195 00:21:41.195 ' 00:21:41.195 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:41.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.195 --rc genhtml_branch_coverage=1 00:21:41.195 --rc genhtml_function_coverage=1 00:21:41.195 --rc genhtml_legend=1 00:21:41.195 --rc geninfo_all_blocks=1 00:21:41.195 --rc geninfo_unexecuted_blocks=1 00:21:41.195 00:21:41.195 ' 00:21:41.195 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:41.195 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:41.195 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:41.195 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:41.195 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:41.195 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:41.195 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:41.195 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:41.195 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:41.195 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:41.195 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:41.195 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:41.453 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:41.453 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:41.453 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:41.453 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:41.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=c341829b862641cf87763c6bc1f3be65 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:21:41.454 18:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:48.022 18:59:09 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:48.022 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:48.022 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:48.023 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:48.023 Found net devices under 0000:86:00.0: cvl_0_0 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:48.023 Found net devices under 0000:86:00.1: cvl_0_1 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:48.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:48.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.425 ms 00:21:48.023 00:21:48.023 --- 10.0.0.2 ping statistics --- 00:21:48.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.023 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:48.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:48.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:21:48.023 00:21:48.023 --- 10.0.0.1 ping statistics --- 00:21:48.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.023 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3720852 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3720852 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3720852 ']' 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.023 [2024-11-20 18:59:09.576532] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:21:48.023 [2024-11-20 18:59:09.576585] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:48.023 [2024-11-20 18:59:09.657847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.023 [2024-11-20 18:59:09.696886] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:48.023 [2024-11-20 18:59:09.696922] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:48.023 [2024-11-20 18:59:09.696929] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:48.023 [2024-11-20 18:59:09.696935] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:48.023 [2024-11-20 18:59:09.696940] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:48.023 [2024-11-20 18:59:09.697496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.023 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.023 [2024-11-20 18:59:09.842729] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:48.024 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.024 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:48.024 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.024 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.024 null0 00:21:48.024 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.024 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:48.024 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.024 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.024 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.024 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:48.024 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.024 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.024 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.024 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g c341829b862641cf87763c6bc1f3be65 00:21:48.024 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.024 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.024 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.024 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:48.024 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.024 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.024 [2024-11-20 18:59:09.895011] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:48.024 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.024 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:48.024 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.024 18:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.024 nvme0n1 00:21:48.024 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.024 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:48.024 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.024 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.024 [ 00:21:48.024 { 00:21:48.024 "name": "nvme0n1", 00:21:48.024 "aliases": [ 00:21:48.024 "c341829b-8626-41cf-8776-3c6bc1f3be65" 00:21:48.024 ], 00:21:48.024 "product_name": "NVMe disk", 00:21:48.024 "block_size": 512, 00:21:48.024 "num_blocks": 2097152, 00:21:48.024 "uuid": "c341829b-8626-41cf-8776-3c6bc1f3be65", 00:21:48.024 "numa_id": 1, 00:21:48.024 "assigned_rate_limits": { 00:21:48.024 "rw_ios_per_sec": 0, 00:21:48.024 "rw_mbytes_per_sec": 0, 00:21:48.024 "r_mbytes_per_sec": 0, 00:21:48.024 "w_mbytes_per_sec": 0 00:21:48.024 }, 00:21:48.024 "claimed": false, 00:21:48.024 "zoned": false, 00:21:48.024 "supported_io_types": { 00:21:48.024 "read": true, 00:21:48.024 "write": true, 00:21:48.024 "unmap": false, 00:21:48.024 "flush": true, 00:21:48.024 "reset": true, 00:21:48.024 "nvme_admin": true, 00:21:48.024 "nvme_io": true, 00:21:48.024 "nvme_io_md": false, 00:21:48.024 "write_zeroes": true, 00:21:48.024 "zcopy": false, 00:21:48.024 "get_zone_info": false, 00:21:48.024 "zone_management": false, 00:21:48.024 "zone_append": false, 00:21:48.024 "compare": true, 00:21:48.024 "compare_and_write": true, 00:21:48.024 "abort": true, 00:21:48.024 "seek_hole": false, 00:21:48.024 "seek_data": false, 00:21:48.024 "copy": true, 00:21:48.024 
"nvme_iov_md": false 00:21:48.024 }, 00:21:48.024 "memory_domains": [ 00:21:48.024 { 00:21:48.024 "dma_device_id": "system", 00:21:48.024 "dma_device_type": 1 00:21:48.024 } 00:21:48.024 ], 00:21:48.024 "driver_specific": { 00:21:48.024 "nvme": [ 00:21:48.024 { 00:21:48.024 "trid": { 00:21:48.024 "trtype": "TCP", 00:21:48.024 "adrfam": "IPv4", 00:21:48.024 "traddr": "10.0.0.2", 00:21:48.024 "trsvcid": "4420", 00:21:48.024 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:48.024 }, 00:21:48.024 "ctrlr_data": { 00:21:48.024 "cntlid": 1, 00:21:48.024 "vendor_id": "0x8086", 00:21:48.024 "model_number": "SPDK bdev Controller", 00:21:48.024 "serial_number": "00000000000000000000", 00:21:48.024 "firmware_revision": "25.01", 00:21:48.024 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:48.024 "oacs": { 00:21:48.024 "security": 0, 00:21:48.024 "format": 0, 00:21:48.024 "firmware": 0, 00:21:48.024 "ns_manage": 0 00:21:48.024 }, 00:21:48.024 "multi_ctrlr": true, 00:21:48.024 "ana_reporting": false 00:21:48.024 }, 00:21:48.024 "vs": { 00:21:48.024 "nvme_version": "1.3" 00:21:48.024 }, 00:21:48.024 "ns_data": { 00:21:48.024 "id": 1, 00:21:48.024 "can_share": true 00:21:48.024 } 00:21:48.024 } 00:21:48.024 ], 00:21:48.024 "mp_policy": "active_passive" 00:21:48.024 } 00:21:48.024 } 00:21:48.024 ] 00:21:48.024 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.024 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:48.024 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.024 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.024 [2024-11-20 18:59:10.163560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:48.024 [2024-11-20 18:59:10.163622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x1b39900 (9): Bad file descriptor 00:21:48.024 [2024-11-20 18:59:10.295296] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:21:48.024 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.024 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:48.024 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.024 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.024 [ 00:21:48.024 { 00:21:48.024 "name": "nvme0n1", 00:21:48.024 "aliases": [ 00:21:48.024 "c341829b-8626-41cf-8776-3c6bc1f3be65" 00:21:48.024 ], 00:21:48.024 "product_name": "NVMe disk", 00:21:48.024 "block_size": 512, 00:21:48.024 "num_blocks": 2097152, 00:21:48.024 "uuid": "c341829b-8626-41cf-8776-3c6bc1f3be65", 00:21:48.024 "numa_id": 1, 00:21:48.024 "assigned_rate_limits": { 00:21:48.024 "rw_ios_per_sec": 0, 00:21:48.024 "rw_mbytes_per_sec": 0, 00:21:48.024 "r_mbytes_per_sec": 0, 00:21:48.024 "w_mbytes_per_sec": 0 00:21:48.024 }, 00:21:48.024 "claimed": false, 00:21:48.024 "zoned": false, 00:21:48.024 "supported_io_types": { 00:21:48.024 "read": true, 00:21:48.024 "write": true, 00:21:48.024 "unmap": false, 00:21:48.024 "flush": true, 00:21:48.024 "reset": true, 00:21:48.024 "nvme_admin": true, 00:21:48.024 "nvme_io": true, 00:21:48.024 "nvme_io_md": false, 00:21:48.024 "write_zeroes": true, 00:21:48.024 "zcopy": false, 00:21:48.024 "get_zone_info": false, 00:21:48.024 "zone_management": false, 00:21:48.024 "zone_append": false, 00:21:48.024 "compare": true, 00:21:48.024 "compare_and_write": true, 00:21:48.024 "abort": true, 00:21:48.024 "seek_hole": false, 00:21:48.024 "seek_data": false, 00:21:48.024 "copy": true, 00:21:48.024 "nvme_iov_md": false 00:21:48.024 }, 00:21:48.024 "memory_domains": [ 
00:21:48.024 { 00:21:48.024 "dma_device_id": "system", 00:21:48.024 "dma_device_type": 1 00:21:48.024 } 00:21:48.024 ], 00:21:48.024 "driver_specific": { 00:21:48.024 "nvme": [ 00:21:48.024 { 00:21:48.024 "trid": { 00:21:48.024 "trtype": "TCP", 00:21:48.024 "adrfam": "IPv4", 00:21:48.024 "traddr": "10.0.0.2", 00:21:48.024 "trsvcid": "4420", 00:21:48.024 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:48.024 }, 00:21:48.024 "ctrlr_data": { 00:21:48.024 "cntlid": 2, 00:21:48.024 "vendor_id": "0x8086", 00:21:48.024 "model_number": "SPDK bdev Controller", 00:21:48.024 "serial_number": "00000000000000000000", 00:21:48.024 "firmware_revision": "25.01", 00:21:48.024 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:48.024 "oacs": { 00:21:48.024 "security": 0, 00:21:48.024 "format": 0, 00:21:48.024 "firmware": 0, 00:21:48.024 "ns_manage": 0 00:21:48.024 }, 00:21:48.024 "multi_ctrlr": true, 00:21:48.024 "ana_reporting": false 00:21:48.024 }, 00:21:48.024 "vs": { 00:21:48.024 "nvme_version": "1.3" 00:21:48.024 }, 00:21:48.024 "ns_data": { 00:21:48.024 "id": 1, 00:21:48.024 "can_share": true 00:21:48.024 } 00:21:48.024 } 00:21:48.024 ], 00:21:48.024 "mp_policy": "active_passive" 00:21:48.024 } 00:21:48.024 } 00:21:48.024 ] 00:21:48.024 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.025 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:48.025 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.025 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.025 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.025 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:48.025 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.rFR5xP3KoI 
00:21:48.025 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:48.025 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.rFR5xP3KoI 00:21:48.025 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.rFR5xP3KoI 00:21:48.025 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.025 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.283 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.283 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:48.283 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.283 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.283 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.283 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:48.283 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.283 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.283 [2024-11-20 18:59:10.372190] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:48.283 [2024-11-20 18:59:10.372308] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:48.283 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:48.283 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:21:48.283 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.283 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.283 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.283 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:48.283 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.283 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.283 [2024-11-20 18:59:10.392257] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:48.283 nvme0n1 00:21:48.283 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.283 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:48.283 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.283 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.283 [ 00:21:48.283 { 00:21:48.283 "name": "nvme0n1", 00:21:48.283 "aliases": [ 00:21:48.283 "c341829b-8626-41cf-8776-3c6bc1f3be65" 00:21:48.283 ], 00:21:48.283 "product_name": "NVMe disk", 00:21:48.283 "block_size": 512, 00:21:48.283 "num_blocks": 2097152, 00:21:48.283 "uuid": "c341829b-8626-41cf-8776-3c6bc1f3be65", 00:21:48.283 "numa_id": 1, 00:21:48.283 "assigned_rate_limits": { 00:21:48.283 "rw_ios_per_sec": 0, 00:21:48.283 
"rw_mbytes_per_sec": 0, 00:21:48.283 "r_mbytes_per_sec": 0, 00:21:48.283 "w_mbytes_per_sec": 0 00:21:48.283 }, 00:21:48.283 "claimed": false, 00:21:48.283 "zoned": false, 00:21:48.283 "supported_io_types": { 00:21:48.283 "read": true, 00:21:48.283 "write": true, 00:21:48.283 "unmap": false, 00:21:48.283 "flush": true, 00:21:48.283 "reset": true, 00:21:48.283 "nvme_admin": true, 00:21:48.283 "nvme_io": true, 00:21:48.283 "nvme_io_md": false, 00:21:48.283 "write_zeroes": true, 00:21:48.283 "zcopy": false, 00:21:48.283 "get_zone_info": false, 00:21:48.283 "zone_management": false, 00:21:48.283 "zone_append": false, 00:21:48.283 "compare": true, 00:21:48.283 "compare_and_write": true, 00:21:48.283 "abort": true, 00:21:48.283 "seek_hole": false, 00:21:48.283 "seek_data": false, 00:21:48.283 "copy": true, 00:21:48.283 "nvme_iov_md": false 00:21:48.283 }, 00:21:48.283 "memory_domains": [ 00:21:48.283 { 00:21:48.283 "dma_device_id": "system", 00:21:48.283 "dma_device_type": 1 00:21:48.283 } 00:21:48.283 ], 00:21:48.283 "driver_specific": { 00:21:48.283 "nvme": [ 00:21:48.283 { 00:21:48.283 "trid": { 00:21:48.283 "trtype": "TCP", 00:21:48.283 "adrfam": "IPv4", 00:21:48.283 "traddr": "10.0.0.2", 00:21:48.283 "trsvcid": "4421", 00:21:48.283 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:48.283 }, 00:21:48.283 "ctrlr_data": { 00:21:48.283 "cntlid": 3, 00:21:48.283 "vendor_id": "0x8086", 00:21:48.283 "model_number": "SPDK bdev Controller", 00:21:48.283 "serial_number": "00000000000000000000", 00:21:48.283 "firmware_revision": "25.01", 00:21:48.283 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:48.283 "oacs": { 00:21:48.283 "security": 0, 00:21:48.283 "format": 0, 00:21:48.283 "firmware": 0, 00:21:48.283 "ns_manage": 0 00:21:48.283 }, 00:21:48.283 "multi_ctrlr": true, 00:21:48.283 "ana_reporting": false 00:21:48.283 }, 00:21:48.283 "vs": { 00:21:48.283 "nvme_version": "1.3" 00:21:48.283 }, 00:21:48.283 "ns_data": { 00:21:48.283 "id": 1, 00:21:48.283 "can_share": true 00:21:48.283 } 
00:21:48.283 } 00:21:48.283 ], 00:21:48.283 "mp_policy": "active_passive" 00:21:48.283 } 00:21:48.283 } 00:21:48.283 ] 00:21:48.283 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.283 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:48.283 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.283 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.283 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.283 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.rFR5xP3KoI 00:21:48.283 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:21:48.283 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:21:48.283 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:48.283 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:21:48.283 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:48.283 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:21:48.283 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:48.283 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:48.283 rmmod nvme_tcp 00:21:48.283 rmmod nvme_fabrics 00:21:48.283 rmmod nvme_keyring 00:21:48.284 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:48.284 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:21:48.284 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:21:48.284 18:59:10 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3720852 ']' 00:21:48.284 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3720852 00:21:48.284 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3720852 ']' 00:21:48.284 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3720852 00:21:48.284 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:21:48.284 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:48.284 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3720852 00:21:48.542 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:48.542 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:48.542 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3720852' 00:21:48.542 killing process with pid 3720852 00:21:48.542 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3720852 00:21:48.542 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3720852 00:21:48.542 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:48.542 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:48.542 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:48.542 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:21:48.542 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:21:48.542 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:48.542 
18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:21:48.542 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:48.542 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:48.542 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.542 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:48.542 18:59:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.075 18:59:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:51.075 00:21:51.075 real 0m9.498s 00:21:51.075 user 0m3.046s 00:21:51.075 sys 0m4.881s 00:21:51.075 18:59:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:51.075 18:59:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:51.075 ************************************ 00:21:51.075 END TEST nvmf_async_init 00:21:51.075 ************************************ 00:21:51.075 18:59:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:51.075 18:59:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:51.075 18:59:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:51.075 18:59:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.075 ************************************ 00:21:51.075 START TEST dma 00:21:51.075 ************************************ 00:21:51.076 18:59:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:21:51.076 * Looking for test storage... 00:21:51.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:51.076 18:59:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:51.076 18:59:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:21:51.076 18:59:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:51.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.076 --rc genhtml_branch_coverage=1 00:21:51.076 --rc genhtml_function_coverage=1 00:21:51.076 --rc genhtml_legend=1 00:21:51.076 --rc geninfo_all_blocks=1 00:21:51.076 --rc geninfo_unexecuted_blocks=1 00:21:51.076 00:21:51.076 ' 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:51.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.076 --rc genhtml_branch_coverage=1 00:21:51.076 --rc genhtml_function_coverage=1 
00:21:51.076 --rc genhtml_legend=1 00:21:51.076 --rc geninfo_all_blocks=1 00:21:51.076 --rc geninfo_unexecuted_blocks=1 00:21:51.076 00:21:51.076 ' 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:51.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.076 --rc genhtml_branch_coverage=1 00:21:51.076 --rc genhtml_function_coverage=1 00:21:51.076 --rc genhtml_legend=1 00:21:51.076 --rc geninfo_all_blocks=1 00:21:51.076 --rc geninfo_unexecuted_blocks=1 00:21:51.076 00:21:51.076 ' 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:51.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.076 --rc genhtml_branch_coverage=1 00:21:51.076 --rc genhtml_function_coverage=1 00:21:51.076 --rc genhtml_legend=1 00:21:51.076 --rc geninfo_all_blocks=1 00:21:51.076 --rc geninfo_unexecuted_blocks=1 00:21:51.076 00:21:51.076 ' 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:51.076 18:59:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:21:51.077 
18:59:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:51.077 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:21:51.077 00:21:51.077 real 0m0.209s 00:21:51.077 user 0m0.124s 00:21:51.077 sys 0m0.099s 00:21:51.077 18:59:13 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:51.077 ************************************ 00:21:51.077 END TEST dma 00:21:51.077 ************************************ 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.077 ************************************ 00:21:51.077 START TEST nvmf_identify 00:21:51.077 ************************************ 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:51.077 * Looking for test storage... 
00:21:51.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:51.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.077 --rc genhtml_branch_coverage=1 00:21:51.077 --rc genhtml_function_coverage=1 00:21:51.077 --rc genhtml_legend=1 00:21:51.077 --rc geninfo_all_blocks=1 00:21:51.077 --rc geninfo_unexecuted_blocks=1 00:21:51.077 00:21:51.077 ' 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:21:51.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.077 --rc genhtml_branch_coverage=1 00:21:51.077 --rc genhtml_function_coverage=1 00:21:51.077 --rc genhtml_legend=1 00:21:51.077 --rc geninfo_all_blocks=1 00:21:51.077 --rc geninfo_unexecuted_blocks=1 00:21:51.077 00:21:51.077 ' 00:21:51.077 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:51.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.077 --rc genhtml_branch_coverage=1 00:21:51.077 --rc genhtml_function_coverage=1 00:21:51.077 --rc genhtml_legend=1 00:21:51.077 --rc geninfo_all_blocks=1 00:21:51.077 --rc geninfo_unexecuted_blocks=1 00:21:51.078 00:21:51.078 ' 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:51.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.078 --rc genhtml_branch_coverage=1 00:21:51.078 --rc genhtml_function_coverage=1 00:21:51.078 --rc genhtml_legend=1 00:21:51.078 --rc geninfo_all_blocks=1 00:21:51.078 --rc geninfo_unexecuted_blocks=1 00:21:51.078 00:21:51.078 ' 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:51.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:51.078 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.337 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:51.338 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:51.338 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:21:51.338 18:59:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:57.905 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:57.905 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:21:57.905 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:57.905 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:57.905 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:57.905 18:59:19 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:57.905 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:57.905 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:21:57.905 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:57.905 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:21:57.905 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:21:57.905 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:21:57.905 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:21:57.905 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:21:57.905 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:21:57.905 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:57.905 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:57.905 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:57.905 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:57.905 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:57.905 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:57.905 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:57.905 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:57.905 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:57.905 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:57.905 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:57.905 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:57.905 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:57.905 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:57.905 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:57.905 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:57.905 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:57.905 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:57.906 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:57.906 
18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:57.906 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:57.906 Found net devices under 0000:86:00.0: cvl_0_0 00:21:57.906 18:59:19 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:57.906 Found net devices under 0000:86:00.1: cvl_0_1 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:57.906 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:57.906 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.461 ms 00:21:57.906 00:21:57.906 --- 10.0.0.2 ping statistics --- 00:21:57.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.906 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:57.906 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:57.906 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:21:57.906 00:21:57.906 --- 10.0.0.1 ping statistics --- 00:21:57.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.906 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:57.906 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:57.907 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:57.907 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:57.907 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:57.907 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:57.907 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:57.907 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:57.907 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:57.907 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3724677 00:21:57.907 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:57.907 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:57.907 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3724677 00:21:57.907 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3724677 ']' 00:21:57.907 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:57.907 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:57.907 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:57.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:57.907 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:57.907 18:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:57.907 [2024-11-20 18:59:19.382656] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:21:57.907 [2024-11-20 18:59:19.382705] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:57.907 [2024-11-20 18:59:19.462691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:57.907 [2024-11-20 18:59:19.505955] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:57.907 [2024-11-20 18:59:19.505992] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:57.907 [2024-11-20 18:59:19.505999] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:57.907 [2024-11-20 18:59:19.506005] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:57.907 [2024-11-20 18:59:19.506010] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:57.907 [2024-11-20 18:59:19.507618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.907 [2024-11-20 18:59:19.507726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:57.907 [2024-11-20 18:59:19.507834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.907 [2024-11-20 18:59:19.507835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:58.167 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:58.167 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:21:58.167 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:58.167 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.167 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.168 [2024-11-20 18:59:20.240022] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:58.168 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.168 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:58.168 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:58.168 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.168 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:58.168 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.168 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.168 Malloc0 00:21:58.168 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.168 18:59:20 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:58.168 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.168 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.168 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.168 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:58.168 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.168 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.168 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.168 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:58.168 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.168 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.168 [2024-11-20 18:59:20.340213] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:58.168 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.168 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:58.168 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.168 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.168 18:59:20 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.168 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:58.168 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.168 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.168 [ 00:21:58.168 { 00:21:58.168 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:58.168 "subtype": "Discovery", 00:21:58.168 "listen_addresses": [ 00:21:58.168 { 00:21:58.168 "trtype": "TCP", 00:21:58.168 "adrfam": "IPv4", 00:21:58.168 "traddr": "10.0.0.2", 00:21:58.168 "trsvcid": "4420" 00:21:58.168 } 00:21:58.168 ], 00:21:58.168 "allow_any_host": true, 00:21:58.168 "hosts": [] 00:21:58.168 }, 00:21:58.168 { 00:21:58.168 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.168 "subtype": "NVMe", 00:21:58.168 "listen_addresses": [ 00:21:58.168 { 00:21:58.168 "trtype": "TCP", 00:21:58.168 "adrfam": "IPv4", 00:21:58.168 "traddr": "10.0.0.2", 00:21:58.168 "trsvcid": "4420" 00:21:58.168 } 00:21:58.168 ], 00:21:58.168 "allow_any_host": true, 00:21:58.168 "hosts": [], 00:21:58.168 "serial_number": "SPDK00000000000001", 00:21:58.168 "model_number": "SPDK bdev Controller", 00:21:58.168 "max_namespaces": 32, 00:21:58.168 "min_cntlid": 1, 00:21:58.168 "max_cntlid": 65519, 00:21:58.168 "namespaces": [ 00:21:58.168 { 00:21:58.168 "nsid": 1, 00:21:58.168 "bdev_name": "Malloc0", 00:21:58.168 "name": "Malloc0", 00:21:58.168 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:58.168 "eui64": "ABCDEF0123456789", 00:21:58.168 "uuid": "d1ff3ce0-26c7-469e-98f6-2a058a2bb653" 00:21:58.168 } 00:21:58.168 ] 00:21:58.168 } 00:21:58.168 ] 00:21:58.168 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.168 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:58.168 [2024-11-20 18:59:20.393479] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:21:58.168 [2024-11-20 18:59:20.393525] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3724925 ] 00:21:58.168 [2024-11-20 18:59:20.432716] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:21:58.168 [2024-11-20 18:59:20.432759] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:58.168 [2024-11-20 18:59:20.432764] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:58.168 [2024-11-20 18:59:20.432776] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:58.168 [2024-11-20 18:59:20.432786] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:58.168 [2024-11-20 18:59:20.436485] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:21:58.168 [2024-11-20 18:59:20.436517] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x81e690 0 00:21:58.168 [2024-11-20 18:59:20.447212] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:58.168 [2024-11-20 18:59:20.447226] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:58.168 [2024-11-20 18:59:20.447230] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:58.168 [2024-11-20 18:59:20.447233] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:58.168 [2024-11-20 18:59:20.447264] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.168 [2024-11-20 18:59:20.447269] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.168 [2024-11-20 18:59:20.447273] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x81e690) 00:21:58.168 [2024-11-20 18:59:20.447299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:58.168 [2024-11-20 18:59:20.447317] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x880100, cid 0, qid 0 00:21:58.168 [2024-11-20 18:59:20.458210] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.168 [2024-11-20 18:59:20.458219] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.168 [2024-11-20 18:59:20.458223] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.168 [2024-11-20 18:59:20.458227] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x880100) on tqpair=0x81e690 00:21:58.168 [2024-11-20 18:59:20.458235] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:58.168 [2024-11-20 18:59:20.458241] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:21:58.168 [2024-11-20 18:59:20.458246] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:21:58.168 [2024-11-20 18:59:20.458258] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.168 [2024-11-20 18:59:20.458262] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.168 [2024-11-20 18:59:20.458265] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x81e690) 
00:21:58.168 [2024-11-20 18:59:20.458272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.168 [2024-11-20 18:59:20.458286] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x880100, cid 0, qid 0 00:21:58.168 [2024-11-20 18:59:20.458459] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.168 [2024-11-20 18:59:20.458465] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.168 [2024-11-20 18:59:20.458468] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.168 [2024-11-20 18:59:20.458472] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x880100) on tqpair=0x81e690 00:21:58.168 [2024-11-20 18:59:20.458476] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:21:58.168 [2024-11-20 18:59:20.458483] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:21:58.168 [2024-11-20 18:59:20.458489] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.168 [2024-11-20 18:59:20.458492] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.168 [2024-11-20 18:59:20.458495] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x81e690) 00:21:58.168 [2024-11-20 18:59:20.458501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.168 [2024-11-20 18:59:20.458511] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x880100, cid 0, qid 0 00:21:58.168 [2024-11-20 18:59:20.458604] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.168 [2024-11-20 18:59:20.458610] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:21:58.168 [2024-11-20 18:59:20.458613] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.168 [2024-11-20 18:59:20.458616] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x880100) on tqpair=0x81e690 00:21:58.168 [2024-11-20 18:59:20.458621] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:21:58.168 [2024-11-20 18:59:20.458627] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:58.168 [2024-11-20 18:59:20.458633] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.168 [2024-11-20 18:59:20.458636] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.168 [2024-11-20 18:59:20.458639] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x81e690) 00:21:58.168 [2024-11-20 18:59:20.458647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.168 [2024-11-20 18:59:20.458657] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x880100, cid 0, qid 0 00:21:58.168 [2024-11-20 18:59:20.458756] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.169 [2024-11-20 18:59:20.458761] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.169 [2024-11-20 18:59:20.458764] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.169 [2024-11-20 18:59:20.458767] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x880100) on tqpair=0x81e690 00:21:58.169 [2024-11-20 18:59:20.458772] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:58.169 [2024-11-20 18:59:20.458780] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.169 [2024-11-20 18:59:20.458783] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.169 [2024-11-20 18:59:20.458786] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x81e690) 00:21:58.169 [2024-11-20 18:59:20.458792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.169 [2024-11-20 18:59:20.458801] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x880100, cid 0, qid 0 00:21:58.169 [2024-11-20 18:59:20.458862] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.169 [2024-11-20 18:59:20.458867] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.169 [2024-11-20 18:59:20.458870] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.169 [2024-11-20 18:59:20.458873] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x880100) on tqpair=0x81e690 00:21:58.169 [2024-11-20 18:59:20.458877] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:58.169 [2024-11-20 18:59:20.458882] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:58.169 [2024-11-20 18:59:20.458888] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:58.169 [2024-11-20 18:59:20.458996] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:21:58.169 [2024-11-20 18:59:20.459000] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:21:58.169 [2024-11-20 18:59:20.459007] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.169 [2024-11-20 18:59:20.459010] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.169 [2024-11-20 18:59:20.459013] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x81e690) 00:21:58.169 [2024-11-20 18:59:20.459018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.169 [2024-11-20 18:59:20.459028] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x880100, cid 0, qid 0 00:21:58.169 [2024-11-20 18:59:20.459144] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.169 [2024-11-20 18:59:20.459150] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.169 [2024-11-20 18:59:20.459152] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.169 [2024-11-20 18:59:20.459155] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x880100) on tqpair=0x81e690 00:21:58.169 [2024-11-20 18:59:20.459160] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:58.169 [2024-11-20 18:59:20.459167] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.169 [2024-11-20 18:59:20.459171] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.169 [2024-11-20 18:59:20.459176] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x81e690) 00:21:58.169 [2024-11-20 18:59:20.459181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.169 [2024-11-20 18:59:20.459190] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x880100, cid 0, qid 0 00:21:58.169 [2024-11-20 
18:59:20.459295] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.169 [2024-11-20 18:59:20.459301] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.169 [2024-11-20 18:59:20.459304] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.169 [2024-11-20 18:59:20.459307] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x880100) on tqpair=0x81e690 00:21:58.169 [2024-11-20 18:59:20.459310] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:58.169 [2024-11-20 18:59:20.459315] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:58.169 [2024-11-20 18:59:20.459321] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:21:58.169 [2024-11-20 18:59:20.459331] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:58.169 [2024-11-20 18:59:20.459339] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.169 [2024-11-20 18:59:20.459342] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x81e690) 00:21:58.169 [2024-11-20 18:59:20.459348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.169 [2024-11-20 18:59:20.459358] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x880100, cid 0, qid 0 00:21:58.169 [2024-11-20 18:59:20.459453] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.169 [2024-11-20 18:59:20.459459] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =7 00:21:58.169 [2024-11-20 18:59:20.459462] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.169 [2024-11-20 18:59:20.459465] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x81e690): datao=0, datal=4096, cccid=0 00:21:58.169 [2024-11-20 18:59:20.459469] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x880100) on tqpair(0x81e690): expected_datao=0, payload_size=4096 00:21:58.169 [2024-11-20 18:59:20.459473] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.169 [2024-11-20 18:59:20.459480] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.169 [2024-11-20 18:59:20.459483] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.169 [2024-11-20 18:59:20.459545] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.169 [2024-11-20 18:59:20.459551] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.169 [2024-11-20 18:59:20.459554] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.169 [2024-11-20 18:59:20.459557] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x880100) on tqpair=0x81e690 00:21:58.169 [2024-11-20 18:59:20.459563] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:21:58.169 [2024-11-20 18:59:20.459568] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:21:58.169 [2024-11-20 18:59:20.459571] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:21:58.169 [2024-11-20 18:59:20.459578] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:21:58.169 [2024-11-20 18:59:20.459582] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
fuses compare and write: 1 00:21:58.169 [2024-11-20 18:59:20.459588] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:21:58.169 [2024-11-20 18:59:20.459598] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:58.169 [2024-11-20 18:59:20.459604] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.169 [2024-11-20 18:59:20.459607] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.169 [2024-11-20 18:59:20.459610] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x81e690) 00:21:58.169 [2024-11-20 18:59:20.459616] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:58.169 [2024-11-20 18:59:20.459626] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x880100, cid 0, qid 0 00:21:58.169 [2024-11-20 18:59:20.459698] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.169 [2024-11-20 18:59:20.459703] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.169 [2024-11-20 18:59:20.459706] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.169 [2024-11-20 18:59:20.459709] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x880100) on tqpair=0x81e690 00:21:58.169 [2024-11-20 18:59:20.459715] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.169 [2024-11-20 18:59:20.459719] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.169 [2024-11-20 18:59:20.459722] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x81e690) 00:21:58.169 [2024-11-20 18:59:20.459727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:58.169 [2024-11-20 18:59:20.459732] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.169 [2024-11-20 18:59:20.459735] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.169 [2024-11-20 18:59:20.459738] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x81e690) 00:21:58.169 [2024-11-20 18:59:20.459743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:58.169 [2024-11-20 18:59:20.459748] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.169 [2024-11-20 18:59:20.459751] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.169 [2024-11-20 18:59:20.459754] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x81e690) 00:21:58.169 [2024-11-20 18:59:20.459759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:58.169 [2024-11-20 18:59:20.459764] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.169 [2024-11-20 18:59:20.459767] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.169 [2024-11-20 18:59:20.459770] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x81e690) 00:21:58.169 [2024-11-20 18:59:20.459775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:58.169 [2024-11-20 18:59:20.459779] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:58.169 [2024-11-20 18:59:20.459786] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep 
alive timeout (timeout 30000 ms) 00:21:58.169 [2024-11-20 18:59:20.459792] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.169 [2024-11-20 18:59:20.459795] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x81e690) 00:21:58.169 [2024-11-20 18:59:20.459800] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.169 [2024-11-20 18:59:20.459810] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x880100, cid 0, qid 0 00:21:58.169 [2024-11-20 18:59:20.459816] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x880280, cid 1, qid 0 00:21:58.170 [2024-11-20 18:59:20.459821] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x880400, cid 2, qid 0 00:21:58.170 [2024-11-20 18:59:20.459825] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x880580, cid 3, qid 0 00:21:58.170 [2024-11-20 18:59:20.459829] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x880700, cid 4, qid 0 00:21:58.170 [2024-11-20 18:59:20.459921] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.170 [2024-11-20 18:59:20.459927] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.170 [2024-11-20 18:59:20.459930] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.170 [2024-11-20 18:59:20.459933] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x880700) on tqpair=0x81e690 00:21:58.170 [2024-11-20 18:59:20.459939] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:21:58.170 [2024-11-20 18:59:20.459944] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:21:58.170 [2024-11-20 18:59:20.459954] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.170 [2024-11-20 18:59:20.459957] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x81e690) 00:21:58.170 [2024-11-20 18:59:20.459963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.170 [2024-11-20 18:59:20.459972] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x880700, cid 4, qid 0 00:21:58.170 [2024-11-20 18:59:20.460054] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.170 [2024-11-20 18:59:20.460059] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.170 [2024-11-20 18:59:20.460062] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.170 [2024-11-20 18:59:20.460065] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x81e690): datao=0, datal=4096, cccid=4 00:21:58.170 [2024-11-20 18:59:20.460069] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x880700) on tqpair(0x81e690): expected_datao=0, payload_size=4096 00:21:58.170 [2024-11-20 18:59:20.460073] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.170 [2024-11-20 18:59:20.460078] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.170 [2024-11-20 18:59:20.460082] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.170 [2024-11-20 18:59:20.460090] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.170 [2024-11-20 18:59:20.460096] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.170 [2024-11-20 18:59:20.460098] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.170 [2024-11-20 18:59:20.460102] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x880700) on tqpair=0x81e690 00:21:58.170 [2024-11-20 18:59:20.460113] 
nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:21:58.170 [2024-11-20 18:59:20.460131] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.170 [2024-11-20 18:59:20.460135] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x81e690) 00:21:58.170 [2024-11-20 18:59:20.460141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.170 [2024-11-20 18:59:20.460146] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.170 [2024-11-20 18:59:20.460149] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.170 [2024-11-20 18:59:20.460152] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x81e690) 00:21:58.170 [2024-11-20 18:59:20.460157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:58.170 [2024-11-20 18:59:20.460172] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x880700, cid 4, qid 0 00:21:58.170 [2024-11-20 18:59:20.460177] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x880880, cid 5, qid 0 00:21:58.170 [2024-11-20 18:59:20.460305] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.170 [2024-11-20 18:59:20.460311] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.170 [2024-11-20 18:59:20.460314] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.170 [2024-11-20 18:59:20.460317] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x81e690): datao=0, datal=1024, cccid=4 00:21:58.170 [2024-11-20 18:59:20.460321] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x880700) on tqpair(0x81e690): expected_datao=0, 
payload_size=1024 00:21:58.170 [2024-11-20 18:59:20.460324] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.170 [2024-11-20 18:59:20.460330] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.170 [2024-11-20 18:59:20.460333] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.170 [2024-11-20 18:59:20.460338] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.170 [2024-11-20 18:59:20.460342] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.170 [2024-11-20 18:59:20.460345] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.170 [2024-11-20 18:59:20.460349] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x880880) on tqpair=0x81e690 00:21:58.433 [2024-11-20 18:59:20.505211] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.433 [2024-11-20 18:59:20.505222] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.433 [2024-11-20 18:59:20.505225] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.433 [2024-11-20 18:59:20.505229] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x880700) on tqpair=0x81e690 00:21:58.433 [2024-11-20 18:59:20.505239] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.433 [2024-11-20 18:59:20.505243] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x81e690) 00:21:58.433 [2024-11-20 18:59:20.505250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.433 [2024-11-20 18:59:20.505267] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x880700, cid 4, qid 0 00:21:58.433 [2024-11-20 18:59:20.505357] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.433 [2024-11-20 18:59:20.505363] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.433 [2024-11-20 18:59:20.505366] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.433 [2024-11-20 18:59:20.505369] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x81e690): datao=0, datal=3072, cccid=4 00:21:58.433 [2024-11-20 18:59:20.505373] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x880700) on tqpair(0x81e690): expected_datao=0, payload_size=3072 00:21:58.434 [2024-11-20 18:59:20.505377] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.434 [2024-11-20 18:59:20.505389] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.434 [2024-11-20 18:59:20.505393] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.434 [2024-11-20 18:59:20.546338] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.434 [2024-11-20 18:59:20.546349] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.434 [2024-11-20 18:59:20.546352] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.434 [2024-11-20 18:59:20.546356] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x880700) on tqpair=0x81e690 00:21:58.434 [2024-11-20 18:59:20.546365] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.434 [2024-11-20 18:59:20.546369] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x81e690) 00:21:58.434 [2024-11-20 18:59:20.546375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.434 [2024-11-20 18:59:20.546395] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x880700, cid 4, qid 0 00:21:58.434 [2024-11-20 18:59:20.546534] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.434 [2024-11-20 
18:59:20.546540] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.434 [2024-11-20 18:59:20.546543] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.434 [2024-11-20 18:59:20.546546] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x81e690): datao=0, datal=8, cccid=4 00:21:58.434 [2024-11-20 18:59:20.546549] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x880700) on tqpair(0x81e690): expected_datao=0, payload_size=8 00:21:58.434 [2024-11-20 18:59:20.546553] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.434 [2024-11-20 18:59:20.546559] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.434 [2024-11-20 18:59:20.546562] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.434 [2024-11-20 18:59:20.587265] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.434 [2024-11-20 18:59:20.587275] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.434 [2024-11-20 18:59:20.587278] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.434 [2024-11-20 18:59:20.587281] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x880700) on tqpair=0x81e690 00:21:58.434 ===================================================== 00:21:58.434 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:58.434 ===================================================== 00:21:58.434 Controller Capabilities/Features 00:21:58.434 ================================ 00:21:58.434 Vendor ID: 0000 00:21:58.434 Subsystem Vendor ID: 0000 00:21:58.434 Serial Number: .................... 00:21:58.434 Model Number: ........................................ 
00:21:58.434 Firmware Version: 25.01 00:21:58.434 Recommended Arb Burst: 0 00:21:58.434 IEEE OUI Identifier: 00 00 00 00:21:58.434 Multi-path I/O 00:21:58.434 May have multiple subsystem ports: No 00:21:58.434 May have multiple controllers: No 00:21:58.434 Associated with SR-IOV VF: No 00:21:58.434 Max Data Transfer Size: 131072 00:21:58.434 Max Number of Namespaces: 0 00:21:58.434 Max Number of I/O Queues: 1024 00:21:58.434 NVMe Specification Version (VS): 1.3 00:21:58.434 NVMe Specification Version (Identify): 1.3 00:21:58.434 Maximum Queue Entries: 128 00:21:58.434 Contiguous Queues Required: Yes 00:21:58.434 Arbitration Mechanisms Supported 00:21:58.434 Weighted Round Robin: Not Supported 00:21:58.434 Vendor Specific: Not Supported 00:21:58.434 Reset Timeout: 15000 ms 00:21:58.434 Doorbell Stride: 4 bytes 00:21:58.434 NVM Subsystem Reset: Not Supported 00:21:58.434 Command Sets Supported 00:21:58.434 NVM Command Set: Supported 00:21:58.434 Boot Partition: Not Supported 00:21:58.434 Memory Page Size Minimum: 4096 bytes 00:21:58.434 Memory Page Size Maximum: 4096 bytes 00:21:58.434 Persistent Memory Region: Not Supported 00:21:58.434 Optional Asynchronous Events Supported 00:21:58.434 Namespace Attribute Notices: Not Supported 00:21:58.434 Firmware Activation Notices: Not Supported 00:21:58.434 ANA Change Notices: Not Supported 00:21:58.434 PLE Aggregate Log Change Notices: Not Supported 00:21:58.434 LBA Status Info Alert Notices: Not Supported 00:21:58.434 EGE Aggregate Log Change Notices: Not Supported 00:21:58.434 Normal NVM Subsystem Shutdown event: Not Supported 00:21:58.434 Zone Descriptor Change Notices: Not Supported 00:21:58.434 Discovery Log Change Notices: Supported 00:21:58.434 Controller Attributes 00:21:58.434 128-bit Host Identifier: Not Supported 00:21:58.434 Non-Operational Permissive Mode: Not Supported 00:21:58.434 NVM Sets: Not Supported 00:21:58.434 Read Recovery Levels: Not Supported 00:21:58.434 Endurance Groups: Not Supported 00:21:58.434 
Predictable Latency Mode: Not Supported 00:21:58.434 Traffic Based Keep ALive: Not Supported 00:21:58.434 Namespace Granularity: Not Supported 00:21:58.434 SQ Associations: Not Supported 00:21:58.434 UUID List: Not Supported 00:21:58.434 Multi-Domain Subsystem: Not Supported 00:21:58.434 Fixed Capacity Management: Not Supported 00:21:58.434 Variable Capacity Management: Not Supported 00:21:58.434 Delete Endurance Group: Not Supported 00:21:58.434 Delete NVM Set: Not Supported 00:21:58.434 Extended LBA Formats Supported: Not Supported 00:21:58.434 Flexible Data Placement Supported: Not Supported 00:21:58.434 00:21:58.434 Controller Memory Buffer Support 00:21:58.434 ================================ 00:21:58.434 Supported: No 00:21:58.434 00:21:58.434 Persistent Memory Region Support 00:21:58.434 ================================ 00:21:58.434 Supported: No 00:21:58.434 00:21:58.434 Admin Command Set Attributes 00:21:58.434 ============================ 00:21:58.434 Security Send/Receive: Not Supported 00:21:58.434 Format NVM: Not Supported 00:21:58.434 Firmware Activate/Download: Not Supported 00:21:58.434 Namespace Management: Not Supported 00:21:58.434 Device Self-Test: Not Supported 00:21:58.434 Directives: Not Supported 00:21:58.434 NVMe-MI: Not Supported 00:21:58.434 Virtualization Management: Not Supported 00:21:58.434 Doorbell Buffer Config: Not Supported 00:21:58.434 Get LBA Status Capability: Not Supported 00:21:58.434 Command & Feature Lockdown Capability: Not Supported 00:21:58.434 Abort Command Limit: 1 00:21:58.434 Async Event Request Limit: 4 00:21:58.434 Number of Firmware Slots: N/A 00:21:58.434 Firmware Slot 1 Read-Only: N/A 00:21:58.434 Firmware Activation Without Reset: N/A 00:21:58.434 Multiple Update Detection Support: N/A 00:21:58.434 Firmware Update Granularity: No Information Provided 00:21:58.434 Per-Namespace SMART Log: No 00:21:58.434 Asymmetric Namespace Access Log Page: Not Supported 00:21:58.434 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:21:58.434 Command Effects Log Page: Not Supported 00:21:58.434 Get Log Page Extended Data: Supported 00:21:58.434 Telemetry Log Pages: Not Supported 00:21:58.434 Persistent Event Log Pages: Not Supported 00:21:58.434 Supported Log Pages Log Page: May Support 00:21:58.434 Commands Supported & Effects Log Page: Not Supported 00:21:58.434 Feature Identifiers & Effects Log Page:May Support 00:21:58.434 NVMe-MI Commands & Effects Log Page: May Support 00:21:58.434 Data Area 4 for Telemetry Log: Not Supported 00:21:58.434 Error Log Page Entries Supported: 128 00:21:58.434 Keep Alive: Not Supported 00:21:58.434 00:21:58.434 NVM Command Set Attributes 00:21:58.434 ========================== 00:21:58.434 Submission Queue Entry Size 00:21:58.434 Max: 1 00:21:58.434 Min: 1 00:21:58.434 Completion Queue Entry Size 00:21:58.434 Max: 1 00:21:58.434 Min: 1 00:21:58.434 Number of Namespaces: 0 00:21:58.434 Compare Command: Not Supported 00:21:58.434 Write Uncorrectable Command: Not Supported 00:21:58.434 Dataset Management Command: Not Supported 00:21:58.434 Write Zeroes Command: Not Supported 00:21:58.434 Set Features Save Field: Not Supported 00:21:58.434 Reservations: Not Supported 00:21:58.434 Timestamp: Not Supported 00:21:58.434 Copy: Not Supported 00:21:58.434 Volatile Write Cache: Not Present 00:21:58.434 Atomic Write Unit (Normal): 1 00:21:58.434 Atomic Write Unit (PFail): 1 00:21:58.434 Atomic Compare & Write Unit: 1 00:21:58.434 Fused Compare & Write: Supported 00:21:58.434 Scatter-Gather List 00:21:58.434 SGL Command Set: Supported 00:21:58.434 SGL Keyed: Supported 00:21:58.434 SGL Bit Bucket Descriptor: Not Supported 00:21:58.434 SGL Metadata Pointer: Not Supported 00:21:58.434 Oversized SGL: Not Supported 00:21:58.434 SGL Metadata Address: Not Supported 00:21:58.434 SGL Offset: Supported 00:21:58.434 Transport SGL Data Block: Not Supported 00:21:58.434 Replay Protected Memory Block: Not Supported 00:21:58.434 00:21:58.434 
Firmware Slot Information 00:21:58.434 ========================= 00:21:58.434 Active slot: 0 00:21:58.434 00:21:58.434 00:21:58.434 Error Log 00:21:58.434 ========= 00:21:58.434 00:21:58.434 Active Namespaces 00:21:58.434 ================= 00:21:58.434 Discovery Log Page 00:21:58.435 ================== 00:21:58.435 Generation Counter: 2 00:21:58.435 Number of Records: 2 00:21:58.435 Record Format: 0 00:21:58.435 00:21:58.435 Discovery Log Entry 0 00:21:58.435 ---------------------- 00:21:58.435 Transport Type: 3 (TCP) 00:21:58.435 Address Family: 1 (IPv4) 00:21:58.435 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:58.435 Entry Flags: 00:21:58.435 Duplicate Returned Information: 1 00:21:58.435 Explicit Persistent Connection Support for Discovery: 1 00:21:58.435 Transport Requirements: 00:21:58.435 Secure Channel: Not Required 00:21:58.435 Port ID: 0 (0x0000) 00:21:58.435 Controller ID: 65535 (0xffff) 00:21:58.435 Admin Max SQ Size: 128 00:21:58.435 Transport Service Identifier: 4420 00:21:58.435 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:58.435 Transport Address: 10.0.0.2 00:21:58.435 Discovery Log Entry 1 00:21:58.435 ---------------------- 00:21:58.435 Transport Type: 3 (TCP) 00:21:58.435 Address Family: 1 (IPv4) 00:21:58.435 Subsystem Type: 2 (NVM Subsystem) 00:21:58.435 Entry Flags: 00:21:58.435 Duplicate Returned Information: 0 00:21:58.435 Explicit Persistent Connection Support for Discovery: 0 00:21:58.435 Transport Requirements: 00:21:58.435 Secure Channel: Not Required 00:21:58.435 Port ID: 0 (0x0000) 00:21:58.435 Controller ID: 65535 (0xffff) 00:21:58.435 Admin Max SQ Size: 128 00:21:58.435 Transport Service Identifier: 4420 00:21:58.435 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:58.435 Transport Address: 10.0.0.2 [2024-11-20 18:59:20.587362] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:21:58.435 [2024-11-20 
18:59:20.587372] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x880100) on tqpair=0x81e690 00:21:58.435 [2024-11-20 18:59:20.587378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.435 [2024-11-20 18:59:20.587382] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x880280) on tqpair=0x81e690 00:21:58.435 [2024-11-20 18:59:20.587386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.435 [2024-11-20 18:59:20.587391] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x880400) on tqpair=0x81e690 00:21:58.435 [2024-11-20 18:59:20.587395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.435 [2024-11-20 18:59:20.587399] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x880580) on tqpair=0x81e690 00:21:58.435 [2024-11-20 18:59:20.587403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.435 [2024-11-20 18:59:20.587412] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.435 [2024-11-20 18:59:20.587416] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.435 [2024-11-20 18:59:20.587419] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x81e690) 00:21:58.435 [2024-11-20 18:59:20.587425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.435 [2024-11-20 18:59:20.587440] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x880580, cid 3, qid 0 00:21:58.435 [2024-11-20 18:59:20.587506] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.435 [2024-11-20 
18:59:20.587512] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.435 [2024-11-20 18:59:20.587515] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.435 [2024-11-20 18:59:20.587518] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x880580) on tqpair=0x81e690 00:21:58.435 [2024-11-20 18:59:20.587524] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.435 [2024-11-20 18:59:20.587527] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.435 [2024-11-20 18:59:20.587530] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x81e690) 00:21:58.435 [2024-11-20 18:59:20.587536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.435 [2024-11-20 18:59:20.587550] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x880580, cid 3, qid 0 00:21:58.435 [2024-11-20 18:59:20.587654] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.435 [2024-11-20 18:59:20.587660] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.435 [2024-11-20 18:59:20.587663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.435 [2024-11-20 18:59:20.587666] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x880580) on tqpair=0x81e690 00:21:58.435 [2024-11-20 18:59:20.587670] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:21:58.435 [2024-11-20 18:59:20.587674] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:21:58.435 [2024-11-20 18:59:20.587682] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.435 [2024-11-20 18:59:20.587686] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.435 
[2024-11-20 18:59:20.587689] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x81e690) 00:21:58.435 [2024-11-20 18:59:20.587695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.435 [2024-11-20 18:59:20.587704] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x880580, cid 3, qid 0 00:21:58.435 [2024-11-20 18:59:20.587806] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.435 [2024-11-20 18:59:20.587811] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.435 [2024-11-20 18:59:20.587814] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.435 [2024-11-20 18:59:20.587818] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x880580) on tqpair=0x81e690 00:21:58.435 [2024-11-20 18:59:20.587826] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.435 [2024-11-20 18:59:20.587829] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.435 [2024-11-20 18:59:20.587833] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x81e690) 00:21:58.435 [2024-11-20 18:59:20.587838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.435 [2024-11-20 18:59:20.587847] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x880580, cid 3, qid 0 00:21:58.435 [2024-11-20 18:59:20.587910] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.435 [2024-11-20 18:59:20.587915] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.435 [2024-11-20 18:59:20.587918] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.435 [2024-11-20 18:59:20.587921] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x880580) on tqpair=0x81e690 
00:21:58.435 [2024-11-20 18:59:20.587929] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.435 [2024-11-20 18:59:20.587933] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.435 [2024-11-20 18:59:20.587936] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x81e690) 00:21:58.435 [2024-11-20 18:59:20.587941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.435 [2024-11-20 18:59:20.587950] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x880580, cid 3, qid 0 00:21:58.435 [2024-11-20 18:59:20.588060] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.435 [2024-11-20 18:59:20.588065] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.435 [2024-11-20 18:59:20.588068] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.435 [2024-11-20 18:59:20.588071] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x880580) on tqpair=0x81e690 00:21:58.435 [2024-11-20 18:59:20.588079] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.435 [2024-11-20 18:59:20.588083] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.435 [2024-11-20 18:59:20.588086] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x81e690) 00:21:58.435 [2024-11-20 18:59:20.588093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.435 [2024-11-20 18:59:20.588102] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x880580, cid 3, qid 0 00:21:58.435 [2024-11-20 18:59:20.592211] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.435 [2024-11-20 18:59:20.592219] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.435 
[2024-11-20 18:59:20.592222] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.435 [2024-11-20 18:59:20.592225] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x880580) on tqpair=0x81e690
00:21:58.435 [2024-11-20 18:59:20.592234] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.435 [2024-11-20 18:59:20.592238] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.435 [2024-11-20 18:59:20.592241] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x81e690)
00:21:58.435 [2024-11-20 18:59:20.592247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.435 [2024-11-20 18:59:20.592258] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x880580, cid 3, qid 0
00:21:58.435 [2024-11-20 18:59:20.592322] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.435 [2024-11-20 18:59:20.592328] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.435 [2024-11-20 18:59:20.592330] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.435 [2024-11-20 18:59:20.592334] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x880580) on tqpair=0x81e690
00:21:58.435 [2024-11-20 18:59:20.592340] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds
00:21:58.435
00:21:58.435 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:21:58.435 [2024-11-20 18:59:20.629891] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization...
00:21:58.436 [2024-11-20 18:59:20.629926] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3724929 ]
00:21:58.436 [2024-11-20 18:59:20.670388] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:21:58.436 [2024-11-20 18:59:20.670426] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:21:58.436 [2024-11-20 18:59:20.670430] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:21:58.436 [2024-11-20 18:59:20.670442] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:21:58.436 [2024-11-20 18:59:20.670451] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:21:58.436 [2024-11-20 18:59:20.674377] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:21:58.436 [2024-11-20 18:59:20.674403] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2403690 0
00:21:58.436 [2024-11-20 18:59:20.682215] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:21:58.436 [2024-11-20 18:59:20.682228] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:21:58.436 [2024-11-20 18:59:20.682232] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:21:58.436 [2024-11-20 18:59:20.682235] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:21:58.436 [2024-11-20 18:59:20.682262] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.436 [2024-11-20 18:59:20.682267] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.436 [2024-11-20 18:59:20.682270] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2403690)
00:21:58.436 [2024-11-20 18:59:20.682280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:21:58.436 [2024-11-20 18:59:20.682295] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2465100, cid 0, qid 0
00:21:58.436 [2024-11-20 18:59:20.690213] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.436 [2024-11-20 18:59:20.690221] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.436 [2024-11-20 18:59:20.690224] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.436 [2024-11-20 18:59:20.690228] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2465100) on tqpair=0x2403690
00:21:58.436 [2024-11-20 18:59:20.690235] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:21:58.436 [2024-11-20 18:59:20.690241] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout)
00:21:58.436 [2024-11-20 18:59:20.690245] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout)
00:21:58.436 [2024-11-20 18:59:20.690255] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.436 [2024-11-20 18:59:20.690259] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.436 [2024-11-20 18:59:20.690262] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2403690)
00:21:58.436 [2024-11-20 18:59:20.690268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.436 [2024-11-20 18:59:20.690280] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2465100, cid 0, qid 0
00:21:58.436 [2024-11-20 18:59:20.690412] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.436 [2024-11-20 18:59:20.690418] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.436 [2024-11-20 18:59:20.690421] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.436 [2024-11-20 18:59:20.690424] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2465100) on tqpair=0x2403690
00:21:58.436 [2024-11-20 18:59:20.690428] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout)
00:21:58.436 [2024-11-20 18:59:20.690434] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout)
00:21:58.436 [2024-11-20 18:59:20.690440] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.436 [2024-11-20 18:59:20.690443] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.436 [2024-11-20 18:59:20.690446] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2403690)
00:21:58.436 [2024-11-20 18:59:20.690452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.436 [2024-11-20 18:59:20.690462] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2465100, cid 0, qid 0
00:21:58.436 [2024-11-20 18:59:20.690544] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.436 [2024-11-20 18:59:20.690550] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.436 [2024-11-20 18:59:20.690553] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.436 [2024-11-20 18:59:20.690556] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2465100) on tqpair=0x2403690
00:21:58.436 [2024-11-20 18:59:20.690560] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout)
00:21:58.436 [2024-11-20 18:59:20.690567] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms)
00:21:58.436 [2024-11-20 18:59:20.690572] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.436 [2024-11-20 18:59:20.690577] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.436 [2024-11-20 18:59:20.690581] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2403690)
00:21:58.436 [2024-11-20 18:59:20.690586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.436 [2024-11-20 18:59:20.690596] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2465100, cid 0, qid 0
00:21:58.436 [2024-11-20 18:59:20.690695] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.436 [2024-11-20 18:59:20.690701] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.436 [2024-11-20 18:59:20.690704] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.436 [2024-11-20 18:59:20.690707] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2465100) on tqpair=0x2403690
00:21:58.436 [2024-11-20 18:59:20.690711] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:21:58.436 [2024-11-20 18:59:20.690719] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.436 [2024-11-20 18:59:20.690723] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.436 [2024-11-20 18:59:20.690726] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2403690)
00:21:58.436 [2024-11-20 18:59:20.690731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.436 [2024-11-20 18:59:20.690740] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2465100, cid 0, qid 0
00:21:58.436 [2024-11-20 18:59:20.690848] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.436 [2024-11-20 18:59:20.690853] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.436 [2024-11-20 18:59:20.690856] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.436 [2024-11-20 18:59:20.690859] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2465100) on tqpair=0x2403690
00:21:58.436 [2024-11-20 18:59:20.690863] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0
00:21:58.436 [2024-11-20 18:59:20.690867] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms)
00:21:58.436 [2024-11-20 18:59:20.690873] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:21:58.436 [2024-11-20 18:59:20.690981] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1
00:21:58.436 [2024-11-20 18:59:20.690985] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:21:58.436 [2024-11-20 18:59:20.690991] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.436 [2024-11-20 18:59:20.690994] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.436 [2024-11-20 18:59:20.690997] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2403690)
00:21:58.436 [2024-11-20 18:59:20.691003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.436 [2024-11-20 18:59:20.691012] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2465100, cid 0, qid 0
00:21:58.436 [2024-11-20 18:59:20.691074] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.436 [2024-11-20 18:59:20.691080] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.436 [2024-11-20 18:59:20.691083] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.436 [2024-11-20 18:59:20.691086] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2465100) on tqpair=0x2403690
00:21:58.436 [2024-11-20 18:59:20.691090] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:21:58.436 [2024-11-20 18:59:20.691099] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.436 [2024-11-20 18:59:20.691103] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.436 [2024-11-20 18:59:20.691106] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2403690)
00:21:58.436 [2024-11-20 18:59:20.691111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.436 [2024-11-20 18:59:20.691121] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2465100, cid 0, qid 0
00:21:58.436 [2024-11-20 18:59:20.691228] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.436 [2024-11-20 18:59:20.691234] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.436 [2024-11-20 18:59:20.691237] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.436 [2024-11-20 18:59:20.691240] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2465100) on tqpair=0x2403690
00:21:58.436 [2024-11-20 18:59:20.691244] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:21:58.436 [2024-11-20 18:59:20.691248] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms)
00:21:58.436 [2024-11-20 18:59:20.691255] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout)
00:21:58.436 [2024-11-20 18:59:20.691263] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms)
00:21:58.436 [2024-11-20 18:59:20.691271] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.436 [2024-11-20 18:59:20.691274] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2403690)
00:21:58.436 [2024-11-20 18:59:20.691280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.436 [2024-11-20 18:59:20.691289] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2465100, cid 0, qid 0
00:21:58.437 [2024-11-20 18:59:20.691384] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:58.437 [2024-11-20 18:59:20.691390] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:58.437 [2024-11-20 18:59:20.691392] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:58.437 [2024-11-20 18:59:20.691396] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2403690): datao=0, datal=4096, cccid=0
00:21:58.437 [2024-11-20 18:59:20.691400] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2465100) on tqpair(0x2403690): expected_datao=0, payload_size=4096
00:21:58.437 [2024-11-20 18:59:20.691403] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.437 [2024-11-20 18:59:20.691409] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:58.437 [2024-11-20 18:59:20.691412] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:58.437 [2024-11-20 18:59:20.691429] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.437 [2024-11-20 18:59:20.691435] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.437 [2024-11-20 18:59:20.691438] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.437 [2024-11-20 18:59:20.691441] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2465100) on tqpair=0x2403690
00:21:58.437 [2024-11-20 18:59:20.691447] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295
00:21:58.437 [2024-11-20 18:59:20.691451] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072
00:21:58.437 [2024-11-20 18:59:20.691455] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001
00:21:58.437 [2024-11-20 18:59:20.691460] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16
00:21:58.437 [2024-11-20 18:59:20.691464] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1
00:21:58.437 [2024-11-20 18:59:20.691470] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms)
00:21:58.437 [2024-11-20 18:59:20.691478] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms)
00:21:58.437 [2024-11-20 18:59:20.691483] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.437 [2024-11-20 18:59:20.691487] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.437 [2024-11-20 18:59:20.691490] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2403690)
00:21:58.437 [2024-11-20 18:59:20.691496] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:21:58.437 [2024-11-20 18:59:20.691506] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2465100, cid 0, qid 0
00:21:58.437 [2024-11-20 18:59:20.691582] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.437 [2024-11-20 18:59:20.691588] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.437 [2024-11-20 18:59:20.691591] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.437 [2024-11-20 18:59:20.691594] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2465100) on tqpair=0x2403690
00:21:58.437 [2024-11-20 18:59:20.691599] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.437 [2024-11-20 18:59:20.691602] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.437 [2024-11-20 18:59:20.691605] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2403690)
00:21:58.437 [2024-11-20 18:59:20.691610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:58.437 [2024-11-20 18:59:20.691615] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.437 [2024-11-20 18:59:20.691618] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.437 [2024-11-20 18:59:20.691622] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2403690)
00:21:58.437 [2024-11-20 18:59:20.691626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:58.437 [2024-11-20 18:59:20.691631] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.437 [2024-11-20 18:59:20.691634] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.437 [2024-11-20 18:59:20.691637] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2403690)
00:21:58.437 [2024-11-20 18:59:20.691642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:58.437 [2024-11-20 18:59:20.691647] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.437 [2024-11-20 18:59:20.691650] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.437 [2024-11-20 18:59:20.691653] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2403690)
00:21:58.437 [2024-11-20 18:59:20.691658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:58.437 [2024-11-20 18:59:20.691662] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms)
00:21:58.437 [2024-11-20 18:59:20.691670] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:21:58.437 [2024-11-20 18:59:20.691675] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.437 [2024-11-20 18:59:20.691678] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2403690)
00:21:58.437 [2024-11-20 18:59:20.691684] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.437 [2024-11-20 18:59:20.691697] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2465100, cid 0, qid 0
00:21:58.437 [2024-11-20 18:59:20.691702] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2465280, cid 1, qid 0
00:21:58.437 [2024-11-20 18:59:20.691706] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2465400, cid 2, qid 0
00:21:58.437 [2024-11-20 18:59:20.691709] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2465580, cid 3, qid 0
00:21:58.437 [2024-11-20 18:59:20.691713] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2465700, cid 4, qid 0
00:21:58.437 [2024-11-20 18:59:20.691833] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.437 [2024-11-20 18:59:20.691839] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.437 [2024-11-20 18:59:20.691842] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.437 [2024-11-20 18:59:20.691845] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2465700) on tqpair=0x2403690
00:21:58.437 [2024-11-20 18:59:20.691850] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us
00:21:58.437 [2024-11-20 18:59:20.691855] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms)
00:21:58.437 [2024-11-20 18:59:20.691862] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms)
00:21:58.437 [2024-11-20 18:59:20.691868] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms)
00:21:58.437 [2024-11-20 18:59:20.691873] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.437 [2024-11-20 18:59:20.691876] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.437 [2024-11-20 18:59:20.691879] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2403690)
00:21:58.437 [2024-11-20 18:59:20.691884] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:21:58.437 [2024-11-20 18:59:20.691894] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2465700, cid 4, qid 0
00:21:58.437 [2024-11-20 18:59:20.691955] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.437 [2024-11-20 18:59:20.691960] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.437 [2024-11-20 18:59:20.691963] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.437 [2024-11-20 18:59:20.691966] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2465700) on tqpair=0x2403690
00:21:58.437 [2024-11-20 18:59:20.692016] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms)
00:21:58.437 [2024-11-20 18:59:20.692025] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms)
00:21:58.437 [2024-11-20 18:59:20.692032] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.437 [2024-11-20 18:59:20.692035] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2403690)
00:21:58.437 [2024-11-20 18:59:20.692040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.437 [2024-11-20 18:59:20.692049] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2465700, cid 4, qid 0
00:21:58.437 [2024-11-20 18:59:20.692141] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:58.437 [2024-11-20 18:59:20.692147] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:58.437 [2024-11-20 18:59:20.692150] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:58.437 [2024-11-20 18:59:20.692153] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2403690): datao=0, datal=4096, cccid=4
00:21:58.437 [2024-11-20 18:59:20.692156] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2465700) on tqpair(0x2403690): expected_datao=0, payload_size=4096
00:21:58.437 [2024-11-20 18:59:20.692162] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.437 [2024-11-20 18:59:20.692167] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:58.437 [2024-11-20 18:59:20.692171] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:58.437 [2024-11-20 18:59:20.692186] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.437 [2024-11-20 18:59:20.692192] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.437 [2024-11-20 18:59:20.692194] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.437 [2024-11-20 18:59:20.692198] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2465700) on tqpair=0x2403690
00:21:58.437 [2024-11-20 18:59:20.692210] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added
00:21:58.437 [2024-11-20 18:59:20.692218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms)
00:21:58.437 [2024-11-20 18:59:20.692227] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms)
00:21:58.437 [2024-11-20 18:59:20.692232] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.438 [2024-11-20 18:59:20.692236] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2403690)
00:21:58.438 [2024-11-20 18:59:20.692241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.438 [2024-11-20 18:59:20.692251] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2465700, cid 4, qid 0
00:21:58.438 [2024-11-20 18:59:20.692340] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:58.438 [2024-11-20 18:59:20.692346] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:58.438 [2024-11-20 18:59:20.692349] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:58.438 [2024-11-20 18:59:20.692352] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2403690): datao=0, datal=4096, cccid=4
00:21:58.438 [2024-11-20 18:59:20.692355] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2465700) on tqpair(0x2403690): expected_datao=0, payload_size=4096
00:21:58.438 [2024-11-20 18:59:20.692359] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.438 [2024-11-20 18:59:20.692364] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:58.438 [2024-11-20 18:59:20.692368] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:58.438 [2024-11-20 18:59:20.692376] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.438 [2024-11-20 18:59:20.692381] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.438 [2024-11-20 18:59:20.692384] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.438 [2024-11-20 18:59:20.692387] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2465700) on tqpair=0x2403690
00:21:58.438 [2024-11-20 18:59:20.692397] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:21:58.438 [2024-11-20 18:59:20.692406] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:21:58.438 [2024-11-20 18:59:20.692412] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.438 [2024-11-20 18:59:20.692415] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2403690)
00:21:58.438 [2024-11-20 18:59:20.692420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.438 [2024-11-20 18:59:20.692430] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2465700, cid 4, qid 0
00:21:58.438 [2024-11-20 18:59:20.692543] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:58.438 [2024-11-20 18:59:20.692548] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:58.438 [2024-11-20 18:59:20.692553] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:58.438 [2024-11-20 18:59:20.692556] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2403690): datao=0, datal=4096, cccid=4
00:21:58.438 [2024-11-20 18:59:20.692560] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2465700) on tqpair(0x2403690): expected_datao=0, payload_size=4096
00:21:58.438 [2024-11-20 18:59:20.692563] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.438 [2024-11-20 18:59:20.692569] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:58.438 [2024-11-20 18:59:20.692572] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:58.438 [2024-11-20 18:59:20.692580] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.438 [2024-11-20 18:59:20.692585] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.438 [2024-11-20 18:59:20.692588] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.438 [2024-11-20 18:59:20.692591] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2465700) on tqpair=0x2403690
00:21:58.438 [2024-11-20 18:59:20.692597] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms)
00:21:58.438 [2024-11-20 18:59:20.692605] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms)
00:21:58.438 [2024-11-20 18:59:20.692612] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms)
00:21:58.438 [2024-11-20 18:59:20.692617] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms)
00:21:58.438 [2024-11-20 18:59:20.692621] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms)
00:21:58.438 [2024-11-20 18:59:20.692626] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms)
00:21:58.438 [2024-11-20 18:59:20.692630] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID
00:21:58.438 [2024-11-20 18:59:20.692634] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms)
00:21:58.438 [2024-11-20 18:59:20.692639] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout)
00:21:58.438 [2024-11-20 18:59:20.692649] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.438 [2024-11-20 18:59:20.692653] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2403690)
00:21:58.438 [2024-11-20 18:59:20.692658] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.438 [2024-11-20 18:59:20.692664] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.438 [2024-11-20 18:59:20.692667] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.438 [2024-11-20 18:59:20.692670] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2403690)
00:21:58.438 [2024-11-20 18:59:20.692675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:21:58.438 [2024-11-20 18:59:20.692688] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2465700, cid 4, qid 0
00:21:58.438 [2024-11-20 18:59:20.692692] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2465880, cid 5, qid 0
00:21:58.438 [2024-11-20 18:59:20.692808] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.438 [2024-11-20 18:59:20.692814] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.438 [2024-11-20 18:59:20.692817] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.438 [2024-11-20 18:59:20.692820] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2465700) on tqpair=0x2403690
00:21:58.438 [2024-11-20 18:59:20.692826] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.438 [2024-11-20 18:59:20.692831] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.438 [2024-11-20 18:59:20.692834] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.438 [2024-11-20 18:59:20.692838] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2465880) on tqpair=0x2403690
00:21:58.438 [2024-11-20 18:59:20.692845] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.438 [2024-11-20 18:59:20.692849] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2403690)
00:21:58.438 [2024-11-20 18:59:20.692854] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.438 [2024-11-20 18:59:20.692863] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2465880, cid 5, qid 0
00:21:58.438 [2024-11-20 18:59:20.692959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.438 [2024-11-20 18:59:20.692965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.438 [2024-11-20 18:59:20.692967] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.438 [2024-11-20 18:59:20.692970] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2465880) on tqpair=0x2403690
00:21:58.438 [2024-11-20 18:59:20.692978] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.438 [2024-11-20 18:59:20.692981] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2403690)
00:21:58.438 [2024-11-20 18:59:20.692987] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.438 [2024-11-20 18:59:20.692996] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2465880, cid 5, qid 0
00:21:58.438 [2024-11-20 18:59:20.693076] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.438 [2024-11-20 18:59:20.693081] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.438 [2024-11-20 18:59:20.693084] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.438 [2024-11-20 18:59:20.693088] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2465880) on tqpair=0x2403690
00:21:58.438 [2024-11-20 18:59:20.693096] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.438 [2024-11-20 18:59:20.693100] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2403690)
00:21:58.438 [2024-11-20 18:59:20.693105] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.438 [2024-11-20 18:59:20.693114] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2465880, cid 5, qid 0
00:21:58.438 [2024-11-20 18:59:20.693173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.438 [2024-11-20 18:59:20.693178] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.438 [2024-11-20 18:59:20.693181] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.438 [2024-11-20 18:59:20.693184] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2465880) on tqpair=0x2403690
00:21:58.439 [2024-11-20 18:59:20.693196] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.439 [2024-11-20 18:59:20.693200] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2403690)
00:21:58.439 [2024-11-20 18:59:20.693211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.439 [2024-11-20 18:59:20.693217] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.439 [2024-11-20 18:59:20.693220] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2403690)
00:21:58.439 [2024-11-20 18:59:20.693225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.439 [2024-11-20 18:59:20.693231] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.439 [2024-11-20 18:59:20.693238] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2403690)
00:21:58.439 [2024-11-20 18:59:20.693243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.439 [2024-11-20 18:59:20.693249] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.439 [2024-11-20 18:59:20.693252] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2403690)
00:21:58.439 [2024-11-20 18:59:20.693257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.439 [2024-11-20 18:59:20.693268] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2465880, cid 5, qid 0
00:21:58.439 [2024-11-20 18:59:20.693272] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2465700, cid 4, qid 0
00:21:58.439 [2024-11-20 18:59:20.693276] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2465a00, cid 6, qid 0
00:21:58.439 [2024-11-20 18:59:20.693280] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2465b80, cid 7, qid 0
00:21:58.439 [2024-11-20 18:59:20.693418] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:58.439 [2024-11-20 18:59:20.693424] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:58.439 [2024-11-20 18:59:20.693427] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:58.439 [2024-11-20 18:59:20.693430] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2403690): datao=0, datal=8192, cccid=5
00:21:58.439 [2024-11-20 18:59:20.693434]
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2465880) on tqpair(0x2403690): expected_datao=0, payload_size=8192 00:21:58.439 [2024-11-20 18:59:20.693437] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.439 [2024-11-20 18:59:20.693473] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.439 [2024-11-20 18:59:20.693477] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.439 [2024-11-20 18:59:20.693482] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.439 [2024-11-20 18:59:20.693487] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.439 [2024-11-20 18:59:20.693490] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.439 [2024-11-20 18:59:20.693493] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2403690): datao=0, datal=512, cccid=4 00:21:58.439 [2024-11-20 18:59:20.693496] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2465700) on tqpair(0x2403690): expected_datao=0, payload_size=512 00:21:58.439 [2024-11-20 18:59:20.693500] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.439 [2024-11-20 18:59:20.693505] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.439 [2024-11-20 18:59:20.693508] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.439 [2024-11-20 18:59:20.693513] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.439 [2024-11-20 18:59:20.693518] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.439 [2024-11-20 18:59:20.693520] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.439 [2024-11-20 18:59:20.693523] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2403690): datao=0, datal=512, cccid=6 00:21:58.439 [2024-11-20 18:59:20.693527] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x2465a00) on tqpair(0x2403690): expected_datao=0, payload_size=512 00:21:58.439 [2024-11-20 18:59:20.693531] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.439 [2024-11-20 18:59:20.693536] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.439 [2024-11-20 18:59:20.693539] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.439 [2024-11-20 18:59:20.693544] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.439 [2024-11-20 18:59:20.693548] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.439 [2024-11-20 18:59:20.693551] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.439 [2024-11-20 18:59:20.693556] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2403690): datao=0, datal=4096, cccid=7 00:21:58.439 [2024-11-20 18:59:20.693559] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2465b80) on tqpair(0x2403690): expected_datao=0, payload_size=4096 00:21:58.439 [2024-11-20 18:59:20.693563] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.439 [2024-11-20 18:59:20.693568] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.439 [2024-11-20 18:59:20.693571] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.439 [2024-11-20 18:59:20.693579] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.439 [2024-11-20 18:59:20.693583] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.439 [2024-11-20 18:59:20.693586] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.439 [2024-11-20 18:59:20.693589] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2465880) on tqpair=0x2403690 00:21:58.439 [2024-11-20 18:59:20.693599] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.439 [2024-11-20 18:59:20.693604] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.439 [2024-11-20 18:59:20.693607] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.439 [2024-11-20 18:59:20.693610] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2465700) on tqpair=0x2403690 00:21:58.439 [2024-11-20 18:59:20.693618] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.439 [2024-11-20 18:59:20.693623] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.439 [2024-11-20 18:59:20.693626] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.439 [2024-11-20 18:59:20.693629] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2465a00) on tqpair=0x2403690 00:21:58.439 [2024-11-20 18:59:20.693634] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.439 [2024-11-20 18:59:20.693639] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.439 [2024-11-20 18:59:20.693642] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.439 [2024-11-20 18:59:20.693645] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2465b80) on tqpair=0x2403690 00:21:58.439 ===================================================== 00:21:58.439 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:58.439 ===================================================== 00:21:58.439 Controller Capabilities/Features 00:21:58.439 ================================ 00:21:58.439 Vendor ID: 8086 00:21:58.439 Subsystem Vendor ID: 8086 00:21:58.439 Serial Number: SPDK00000000000001 00:21:58.439 Model Number: SPDK bdev Controller 00:21:58.439 Firmware Version: 25.01 00:21:58.439 Recommended Arb Burst: 6 00:21:58.439 IEEE OUI Identifier: e4 d2 5c 00:21:58.439 Multi-path I/O 00:21:58.439 May have multiple subsystem ports: Yes 00:21:58.439 May have multiple controllers: Yes 00:21:58.439 Associated with SR-IOV VF: No 
00:21:58.439 Max Data Transfer Size: 131072 00:21:58.439 Max Number of Namespaces: 32 00:21:58.439 Max Number of I/O Queues: 127 00:21:58.439 NVMe Specification Version (VS): 1.3 00:21:58.439 NVMe Specification Version (Identify): 1.3 00:21:58.439 Maximum Queue Entries: 128 00:21:58.439 Contiguous Queues Required: Yes 00:21:58.439 Arbitration Mechanisms Supported 00:21:58.439 Weighted Round Robin: Not Supported 00:21:58.439 Vendor Specific: Not Supported 00:21:58.439 Reset Timeout: 15000 ms 00:21:58.439 Doorbell Stride: 4 bytes 00:21:58.439 NVM Subsystem Reset: Not Supported 00:21:58.439 Command Sets Supported 00:21:58.439 NVM Command Set: Supported 00:21:58.439 Boot Partition: Not Supported 00:21:58.439 Memory Page Size Minimum: 4096 bytes 00:21:58.439 Memory Page Size Maximum: 4096 bytes 00:21:58.439 Persistent Memory Region: Not Supported 00:21:58.439 Optional Asynchronous Events Supported 00:21:58.439 Namespace Attribute Notices: Supported 00:21:58.439 Firmware Activation Notices: Not Supported 00:21:58.439 ANA Change Notices: Not Supported 00:21:58.439 PLE Aggregate Log Change Notices: Not Supported 00:21:58.439 LBA Status Info Alert Notices: Not Supported 00:21:58.439 EGE Aggregate Log Change Notices: Not Supported 00:21:58.439 Normal NVM Subsystem Shutdown event: Not Supported 00:21:58.439 Zone Descriptor Change Notices: Not Supported 00:21:58.439 Discovery Log Change Notices: Not Supported 00:21:58.439 Controller Attributes 00:21:58.439 128-bit Host Identifier: Supported 00:21:58.439 Non-Operational Permissive Mode: Not Supported 00:21:58.439 NVM Sets: Not Supported 00:21:58.439 Read Recovery Levels: Not Supported 00:21:58.439 Endurance Groups: Not Supported 00:21:58.439 Predictable Latency Mode: Not Supported 00:21:58.439 Traffic Based Keep ALive: Not Supported 00:21:58.439 Namespace Granularity: Not Supported 00:21:58.439 SQ Associations: Not Supported 00:21:58.439 UUID List: Not Supported 00:21:58.439 Multi-Domain Subsystem: Not Supported 00:21:58.439 
Fixed Capacity Management: Not Supported 00:21:58.439 Variable Capacity Management: Not Supported 00:21:58.439 Delete Endurance Group: Not Supported 00:21:58.439 Delete NVM Set: Not Supported 00:21:58.439 Extended LBA Formats Supported: Not Supported 00:21:58.439 Flexible Data Placement Supported: Not Supported 00:21:58.439 00:21:58.439 Controller Memory Buffer Support 00:21:58.439 ================================ 00:21:58.439 Supported: No 00:21:58.439 00:21:58.439 Persistent Memory Region Support 00:21:58.439 ================================ 00:21:58.439 Supported: No 00:21:58.439 00:21:58.439 Admin Command Set Attributes 00:21:58.440 ============================ 00:21:58.440 Security Send/Receive: Not Supported 00:21:58.440 Format NVM: Not Supported 00:21:58.440 Firmware Activate/Download: Not Supported 00:21:58.440 Namespace Management: Not Supported 00:21:58.440 Device Self-Test: Not Supported 00:21:58.440 Directives: Not Supported 00:21:58.440 NVMe-MI: Not Supported 00:21:58.440 Virtualization Management: Not Supported 00:21:58.440 Doorbell Buffer Config: Not Supported 00:21:58.440 Get LBA Status Capability: Not Supported 00:21:58.440 Command & Feature Lockdown Capability: Not Supported 00:21:58.440 Abort Command Limit: 4 00:21:58.440 Async Event Request Limit: 4 00:21:58.440 Number of Firmware Slots: N/A 00:21:58.440 Firmware Slot 1 Read-Only: N/A 00:21:58.440 Firmware Activation Without Reset: N/A 00:21:58.440 Multiple Update Detection Support: N/A 00:21:58.440 Firmware Update Granularity: No Information Provided 00:21:58.440 Per-Namespace SMART Log: No 00:21:58.440 Asymmetric Namespace Access Log Page: Not Supported 00:21:58.440 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:58.440 Command Effects Log Page: Supported 00:21:58.440 Get Log Page Extended Data: Supported 00:21:58.440 Telemetry Log Pages: Not Supported 00:21:58.440 Persistent Event Log Pages: Not Supported 00:21:58.440 Supported Log Pages Log Page: May Support 00:21:58.440 Commands Supported & 
Effects Log Page: Not Supported 00:21:58.440 Feature Identifiers & Effects Log Page:May Support 00:21:58.440 NVMe-MI Commands & Effects Log Page: May Support 00:21:58.440 Data Area 4 for Telemetry Log: Not Supported 00:21:58.440 Error Log Page Entries Supported: 128 00:21:58.440 Keep Alive: Supported 00:21:58.440 Keep Alive Granularity: 10000 ms 00:21:58.440 00:21:58.440 NVM Command Set Attributes 00:21:58.440 ========================== 00:21:58.440 Submission Queue Entry Size 00:21:58.440 Max: 64 00:21:58.440 Min: 64 00:21:58.440 Completion Queue Entry Size 00:21:58.440 Max: 16 00:21:58.440 Min: 16 00:21:58.440 Number of Namespaces: 32 00:21:58.440 Compare Command: Supported 00:21:58.440 Write Uncorrectable Command: Not Supported 00:21:58.440 Dataset Management Command: Supported 00:21:58.440 Write Zeroes Command: Supported 00:21:58.440 Set Features Save Field: Not Supported 00:21:58.440 Reservations: Supported 00:21:58.440 Timestamp: Not Supported 00:21:58.440 Copy: Supported 00:21:58.440 Volatile Write Cache: Present 00:21:58.440 Atomic Write Unit (Normal): 1 00:21:58.440 Atomic Write Unit (PFail): 1 00:21:58.440 Atomic Compare & Write Unit: 1 00:21:58.440 Fused Compare & Write: Supported 00:21:58.440 Scatter-Gather List 00:21:58.440 SGL Command Set: Supported 00:21:58.440 SGL Keyed: Supported 00:21:58.440 SGL Bit Bucket Descriptor: Not Supported 00:21:58.440 SGL Metadata Pointer: Not Supported 00:21:58.440 Oversized SGL: Not Supported 00:21:58.440 SGL Metadata Address: Not Supported 00:21:58.440 SGL Offset: Supported 00:21:58.440 Transport SGL Data Block: Not Supported 00:21:58.440 Replay Protected Memory Block: Not Supported 00:21:58.440 00:21:58.440 Firmware Slot Information 00:21:58.440 ========================= 00:21:58.440 Active slot: 1 00:21:58.440 Slot 1 Firmware Revision: 25.01 00:21:58.440 00:21:58.440 00:21:58.440 Commands Supported and Effects 00:21:58.440 ============================== 00:21:58.440 Admin Commands 00:21:58.440 -------------- 
00:21:58.440 Get Log Page (02h): Supported 00:21:58.440 Identify (06h): Supported 00:21:58.440 Abort (08h): Supported 00:21:58.440 Set Features (09h): Supported 00:21:58.440 Get Features (0Ah): Supported 00:21:58.440 Asynchronous Event Request (0Ch): Supported 00:21:58.440 Keep Alive (18h): Supported 00:21:58.440 I/O Commands 00:21:58.440 ------------ 00:21:58.440 Flush (00h): Supported LBA-Change 00:21:58.440 Write (01h): Supported LBA-Change 00:21:58.440 Read (02h): Supported 00:21:58.440 Compare (05h): Supported 00:21:58.440 Write Zeroes (08h): Supported LBA-Change 00:21:58.440 Dataset Management (09h): Supported LBA-Change 00:21:58.440 Copy (19h): Supported LBA-Change 00:21:58.440 00:21:58.440 Error Log 00:21:58.440 ========= 00:21:58.440 00:21:58.440 Arbitration 00:21:58.440 =========== 00:21:58.440 Arbitration Burst: 1 00:21:58.440 00:21:58.440 Power Management 00:21:58.440 ================ 00:21:58.440 Number of Power States: 1 00:21:58.440 Current Power State: Power State #0 00:21:58.440 Power State #0: 00:21:58.440 Max Power: 0.00 W 00:21:58.440 Non-Operational State: Operational 00:21:58.440 Entry Latency: Not Reported 00:21:58.440 Exit Latency: Not Reported 00:21:58.440 Relative Read Throughput: 0 00:21:58.440 Relative Read Latency: 0 00:21:58.440 Relative Write Throughput: 0 00:21:58.440 Relative Write Latency: 0 00:21:58.440 Idle Power: Not Reported 00:21:58.440 Active Power: Not Reported 00:21:58.440 Non-Operational Permissive Mode: Not Supported 00:21:58.440 00:21:58.440 Health Information 00:21:58.440 ================== 00:21:58.440 Critical Warnings: 00:21:58.440 Available Spare Space: OK 00:21:58.440 Temperature: OK 00:21:58.440 Device Reliability: OK 00:21:58.440 Read Only: No 00:21:58.440 Volatile Memory Backup: OK 00:21:58.440 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:58.440 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:58.440 Available Spare: 0% 00:21:58.440 Available Spare Threshold: 0% 00:21:58.440 Life Percentage 
Used:[2024-11-20 18:59:20.693722] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.440 [2024-11-20 18:59:20.693727] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2403690) 00:21:58.440 [2024-11-20 18:59:20.693733] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.440 [2024-11-20 18:59:20.693744] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2465b80, cid 7, qid 0 00:21:58.440 [2024-11-20 18:59:20.693813] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.440 [2024-11-20 18:59:20.693818] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.440 [2024-11-20 18:59:20.693821] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.440 [2024-11-20 18:59:20.693824] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2465b80) on tqpair=0x2403690 00:21:58.440 [2024-11-20 18:59:20.693851] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:21:58.440 [2024-11-20 18:59:20.693860] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2465100) on tqpair=0x2403690 00:21:58.440 [2024-11-20 18:59:20.693865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.440 [2024-11-20 18:59:20.693870] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2465280) on tqpair=0x2403690 00:21:58.440 [2024-11-20 18:59:20.693873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.440 [2024-11-20 18:59:20.693877] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2465400) on tqpair=0x2403690 00:21:58.440 [2024-11-20 18:59:20.693881] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.440 [2024-11-20 18:59:20.693888] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2465580) on tqpair=0x2403690 00:21:58.440 [2024-11-20 18:59:20.693892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.440 [2024-11-20 18:59:20.693898] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.440 [2024-11-20 18:59:20.693902] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.440 [2024-11-20 18:59:20.693905] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2403690) 00:21:58.440 [2024-11-20 18:59:20.693910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.440 [2024-11-20 18:59:20.693922] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2465580, cid 3, qid 0 00:21:58.440 [2024-11-20 18:59:20.694012] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.440 [2024-11-20 18:59:20.694018] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.440 [2024-11-20 18:59:20.694021] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.440 [2024-11-20 18:59:20.694024] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2465580) on tqpair=0x2403690 00:21:58.440 [2024-11-20 18:59:20.694029] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.440 [2024-11-20 18:59:20.694033] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.440 [2024-11-20 18:59:20.694036] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2403690) 00:21:58.440 [2024-11-20 18:59:20.694041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.440 [2024-11-20 18:59:20.694052] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2465580, cid 3, qid 0 00:21:58.440 [2024-11-20 18:59:20.694162] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.440 [2024-11-20 18:59:20.694168] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.440 [2024-11-20 18:59:20.694171] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.440 [2024-11-20 18:59:20.694174] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2465580) on tqpair=0x2403690 00:21:58.440 [2024-11-20 18:59:20.694178] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:21:58.441 [2024-11-20 18:59:20.694181] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:21:58.441 [2024-11-20 18:59:20.694189] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.441 [2024-11-20 18:59:20.694192] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.441 [2024-11-20 18:59:20.694195] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2403690) 00:21:58.441 [2024-11-20 18:59:20.698206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.441 [2024-11-20 18:59:20.698220] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2465580, cid 3, qid 0 00:21:58.441 [2024-11-20 18:59:20.698387] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.441 [2024-11-20 18:59:20.698392] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.441 [2024-11-20 18:59:20.698395] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.441 [2024-11-20 18:59:20.698398] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2465580) on tqpair=0x2403690 00:21:58.441 [2024-11-20 18:59:20.698405] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:21:58.441 0% 00:21:58.441 Data Units Read: 0 00:21:58.441 Data Units Written: 0 00:21:58.441 Host Read Commands: 0 00:21:58.441 Host Write Commands: 0 00:21:58.441 Controller Busy Time: 0 minutes 00:21:58.441 Power Cycles: 0 00:21:58.441 Power On Hours: 0 hours 00:21:58.441 Unsafe Shutdowns: 0 00:21:58.441 Unrecoverable Media Errors: 0 00:21:58.441 Lifetime Error Log Entries: 0 00:21:58.441 Warning Temperature Time: 0 minutes 00:21:58.441 Critical Temperature Time: 0 minutes 00:21:58.441 00:21:58.441 Number of Queues 00:21:58.441 ================ 00:21:58.441 Number of I/O Submission Queues: 127 00:21:58.441 Number of I/O Completion Queues: 127 00:21:58.441 00:21:58.441 Active Namespaces 00:21:58.441 ================= 00:21:58.441 Namespace ID:1 00:21:58.441 Error Recovery Timeout: Unlimited 00:21:58.441 Command Set Identifier: NVM (00h) 00:21:58.441 Deallocate: Supported 00:21:58.441 Deallocated/Unwritten Error: Not Supported 00:21:58.441 Deallocated Read Value: Unknown 00:21:58.441 Deallocate in Write Zeroes: Not Supported 00:21:58.441 Deallocated Guard Field: 0xFFFF 00:21:58.441 Flush: Supported 00:21:58.441 Reservation: Supported 00:21:58.441 Namespace Sharing Capabilities: Multiple Controllers 00:21:58.441 Size (in LBAs): 131072 (0GiB) 00:21:58.441 Capacity (in LBAs): 131072 (0GiB) 00:21:58.441 Utilization (in LBAs): 131072 (0GiB) 00:21:58.441 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:58.441 EUI64: ABCDEF0123456789 00:21:58.441 UUID: d1ff3ce0-26c7-469e-98f6-2a058a2bb653 00:21:58.441 Thin Provisioning: Not Supported 00:21:58.441 Per-NS Atomic Units: Yes 00:21:58.441 Atomic Boundary Size (Normal): 0 00:21:58.441 Atomic Boundary Size (PFail): 0 00:21:58.441 Atomic Boundary Offset: 0 00:21:58.441 
Maximum Single Source Range Length: 65535 00:21:58.441 Maximum Copy Length: 65535 00:21:58.441 Maximum Source Range Count: 1 00:21:58.441 NGUID/EUI64 Never Reused: No 00:21:58.441 Namespace Write Protected: No 00:21:58.441 Number of LBA Formats: 1 00:21:58.441 Current LBA Format: LBA Format #00 00:21:58.441 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:58.441 00:21:58.441 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:58.441 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:58.441 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.441 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.441 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.441 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:58.441 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:58.441 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:58.441 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:21:58.441 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:58.441 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:21:58.441 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:58.441 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:58.441 rmmod nvme_tcp 00:21:58.441 rmmod nvme_fabrics 00:21:58.441 rmmod nvme_keyring 00:21:58.701 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:58.701 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:21:58.701 
18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:21:58.701 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3724677 ']' 00:21:58.701 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3724677 00:21:58.701 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 3724677 ']' 00:21:58.701 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 3724677 00:21:58.701 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:21:58.701 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:58.701 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3724677 00:21:58.701 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:58.701 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:58.701 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3724677' 00:21:58.701 killing process with pid 3724677 00:21:58.701 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3724677 00:21:58.701 18:59:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 3724677 00:21:58.701 18:59:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:58.701 18:59:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:58.701 18:59:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:58.701 18:59:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:21:58.701 18:59:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:21:58.701 18:59:21 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:58.701 18:59:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:21:58.701 18:59:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:58.701 18:59:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:58.701 18:59:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.701 18:59:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:58.701 18:59:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:01.250 00:22:01.250 real 0m9.904s 00:22:01.250 user 0m7.929s 00:22:01.250 sys 0m4.906s 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.250 ************************************ 00:22:01.250 END TEST nvmf_identify 00:22:01.250 ************************************ 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.250 ************************************ 00:22:01.250 START TEST nvmf_perf 00:22:01.250 ************************************ 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:01.250 * Looking for test storage... 00:22:01.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:01.250 18:59:23 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:01.250 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:01.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.250 --rc genhtml_branch_coverage=1 00:22:01.250 --rc genhtml_function_coverage=1 00:22:01.250 --rc genhtml_legend=1 00:22:01.250 --rc geninfo_all_blocks=1 00:22:01.251 --rc geninfo_unexecuted_blocks=1 00:22:01.251 00:22:01.251 ' 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:22:01.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.251 --rc genhtml_branch_coverage=1 00:22:01.251 --rc genhtml_function_coverage=1 00:22:01.251 --rc genhtml_legend=1 00:22:01.251 --rc geninfo_all_blocks=1 00:22:01.251 --rc geninfo_unexecuted_blocks=1 00:22:01.251 00:22:01.251 ' 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:01.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.251 --rc genhtml_branch_coverage=1 00:22:01.251 --rc genhtml_function_coverage=1 00:22:01.251 --rc genhtml_legend=1 00:22:01.251 --rc geninfo_all_blocks=1 00:22:01.251 --rc geninfo_unexecuted_blocks=1 00:22:01.251 00:22:01.251 ' 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:01.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.251 --rc genhtml_branch_coverage=1 00:22:01.251 --rc genhtml_function_coverage=1 00:22:01.251 --rc genhtml_legend=1 00:22:01.251 --rc geninfo_all_blocks=1 00:22:01.251 --rc geninfo_unexecuted_blocks=1 00:22:01.251 00:22:01.251 ' 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:01.251 18:59:23 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:01.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:01.251 18:59:23 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:01.251 18:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:07.824 18:59:29 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:07.824 
18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:07.824 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:07.824 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:07.824 Found net devices under 0000:86:00.0: cvl_0_0 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:07.824 18:59:29 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:07.824 Found net devices under 0000:86:00.1: cvl_0_1 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:07.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:07.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.469 ms 00:22:07.824 00:22:07.824 --- 10.0.0.2 ping statistics --- 00:22:07.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.824 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:22:07.824 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:07.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:07.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:22:07.825 00:22:07.825 --- 10.0.0.1 ping statistics --- 00:22:07.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.825 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:22:07.825 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:07.825 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:22:07.825 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:07.825 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:07.825 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:07.825 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:07.825 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:07.825 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:07.825 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:07.825 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:07.825 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:22:07.825 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:07.825 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:07.825 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3728449 00:22:07.825 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3728449 00:22:07.825 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:07.825 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3728449 ']' 00:22:07.825 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.825 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:07.825 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:07.825 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:07.825 18:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:07.825 [2024-11-20 18:59:29.395250] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:22:07.825 [2024-11-20 18:59:29.395294] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:07.825 [2024-11-20 18:59:29.474304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:07.825 [2024-11-20 18:59:29.513921] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:07.825 [2024-11-20 18:59:29.513961] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:07.825 [2024-11-20 18:59:29.513969] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:07.825 [2024-11-20 18:59:29.513977] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:07.825 [2024-11-20 18:59:29.513981] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:07.825 [2024-11-20 18:59:29.515492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:07.825 [2024-11-20 18:59:29.515603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:07.825 [2024-11-20 18:59:29.515689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.825 [2024-11-20 18:59:29.515690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:08.084 18:59:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:08.084 18:59:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:22:08.084 18:59:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:08.084 18:59:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:08.084 18:59:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:08.084 18:59:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:08.084 18:59:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:08.084 18:59:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:11.374 18:59:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:11.374 18:59:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:11.374 18:59:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:11.374 18:59:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:11.634 18:59:33 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:11.634 18:59:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:22:11.634 18:59:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:11.634 18:59:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:11.634 18:59:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:11.634 [2024-11-20 18:59:33.896935] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:11.635 18:59:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:11.894 18:59:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:11.894 18:59:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:12.153 18:59:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:12.153 18:59:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:12.412 18:59:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:12.412 [2024-11-20 18:59:34.732086] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:12.671 18:59:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:22:12.671 18:59:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:12.671 18:59:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:12.671 18:59:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:12.671 18:59:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:14.049 Initializing NVMe Controllers 00:22:14.049 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:14.049 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:14.049 Initialization complete. Launching workers. 00:22:14.049 ======================================================== 00:22:14.049 Latency(us) 00:22:14.049 Device Information : IOPS MiB/s Average min max 00:22:14.049 PCIE (0000:5e:00.0) NSID 1 from core 0: 98358.77 384.21 324.80 28.92 8231.64 00:22:14.049 ======================================================== 00:22:14.049 Total : 98358.77 384.21 324.80 28.92 8231.64 00:22:14.049 00:22:14.049 18:59:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:15.426 Initializing NVMe Controllers 00:22:15.426 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:15.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:15.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:15.426 Initialization complete. Launching workers. 
00:22:15.426 ======================================================== 00:22:15.426 Latency(us) 00:22:15.426 Device Information : IOPS MiB/s Average min max 00:22:15.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 118.58 0.46 8712.42 105.97 45767.17 00:22:15.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 40.86 0.16 25430.41 7197.19 47885.90 00:22:15.426 ======================================================== 00:22:15.426 Total : 159.44 0.62 12996.40 105.97 47885.90 00:22:15.426 00:22:15.426 18:59:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:17.329 Initializing NVMe Controllers 00:22:17.330 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:17.330 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:17.330 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:17.330 Initialization complete. Launching workers. 
00:22:17.330 ======================================================== 00:22:17.330 Latency(us) 00:22:17.330 Device Information : IOPS MiB/s Average min max 00:22:17.330 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11274.99 44.04 2843.04 389.75 8770.66 00:22:17.330 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3885.00 15.18 8271.94 6519.88 23101.85 00:22:17.330 ======================================================== 00:22:17.330 Total : 15159.98 59.22 4234.28 389.75 23101.85 00:22:17.330 00:22:17.330 18:59:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:17.330 18:59:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:17.330 18:59:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:19.866 Initializing NVMe Controllers 00:22:19.866 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:19.866 Controller IO queue size 128, less than required. 00:22:19.866 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:19.866 Controller IO queue size 128, less than required. 00:22:19.866 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:19.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:19.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:19.866 Initialization complete. Launching workers. 
00:22:19.866 ======================================================== 00:22:19.866 Latency(us) 00:22:19.866 Device Information : IOPS MiB/s Average min max 00:22:19.866 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1753.29 438.32 74686.21 54660.37 138749.52 00:22:19.866 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 587.43 146.86 224696.70 73568.54 352308.41 00:22:19.866 ======================================================== 00:22:19.866 Total : 2340.72 585.18 112333.01 54660.37 352308.41 00:22:19.866 00:22:19.866 18:59:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:19.866 No valid NVMe controllers or AIO or URING devices found 00:22:19.866 Initializing NVMe Controllers 00:22:19.866 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:19.866 Controller IO queue size 128, less than required. 00:22:19.866 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:19.866 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:19.866 Controller IO queue size 128, less than required. 00:22:19.866 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:19.866 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:22:19.866 WARNING: Some requested NVMe devices were skipped 00:22:19.866 18:59:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:22.403 Initializing NVMe Controllers 00:22:22.403 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:22.403 Controller IO queue size 128, less than required. 00:22:22.403 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:22.403 Controller IO queue size 128, less than required. 00:22:22.403 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:22.403 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:22.403 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:22.403 Initialization complete. Launching workers. 
00:22:22.403 00:22:22.403 ==================== 00:22:22.403 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:22.403 TCP transport: 00:22:22.403 polls: 10969 00:22:22.403 idle_polls: 7591 00:22:22.403 sock_completions: 3378 00:22:22.403 nvme_completions: 6393 00:22:22.403 submitted_requests: 9530 00:22:22.403 queued_requests: 1 00:22:22.403 00:22:22.403 ==================== 00:22:22.403 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:22.403 TCP transport: 00:22:22.403 polls: 11233 00:22:22.403 idle_polls: 7521 00:22:22.403 sock_completions: 3712 00:22:22.403 nvme_completions: 6679 00:22:22.403 submitted_requests: 9984 00:22:22.403 queued_requests: 1 00:22:22.403 ======================================================== 00:22:22.403 Latency(us) 00:22:22.403 Device Information : IOPS MiB/s Average min max 00:22:22.403 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1597.84 399.46 81295.14 55073.32 126705.98 00:22:22.403 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1669.33 417.33 77584.12 46710.96 118600.09 00:22:22.403 ======================================================== 00:22:22.403 Total : 3267.17 816.79 79399.03 46710.96 126705.98 00:22:22.403 00:22:22.403 18:59:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:22.403 18:59:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:22.662 18:59:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:22.662 18:59:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:22.662 18:59:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:22.662 18:59:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:22.662 18:59:44 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:22:22.662 18:59:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:22.662 18:59:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:22:22.662 18:59:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:22.662 18:59:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:22.662 rmmod nvme_tcp 00:22:22.662 rmmod nvme_fabrics 00:22:22.662 rmmod nvme_keyring 00:22:22.662 18:59:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:22.662 18:59:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:22:22.662 18:59:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:22:22.662 18:59:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3728449 ']' 00:22:22.662 18:59:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3728449 00:22:22.663 18:59:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3728449 ']' 00:22:22.663 18:59:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 3728449 00:22:22.663 18:59:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:22:22.663 18:59:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:22.663 18:59:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3728449 00:22:22.663 18:59:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:22.663 18:59:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:22.663 18:59:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3728449' 00:22:22.663 killing process with pid 3728449 00:22:22.663 18:59:44 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 3728449 00:22:22.663 18:59:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3728449 00:22:25.198 18:59:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:25.198 18:59:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:25.198 18:59:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:25.198 18:59:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:22:25.198 18:59:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:22:25.198 18:59:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:25.198 18:59:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:22:25.198 18:59:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:25.198 18:59:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:25.198 18:59:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.198 18:59:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:25.198 18:59:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.119 18:59:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:27.119 00:22:27.119 real 0m25.882s 00:22:27.119 user 1m9.505s 00:22:27.119 sys 0m8.397s 00:22:27.119 18:59:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:27.119 18:59:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:27.119 ************************************ 00:22:27.119 END TEST nvmf_perf 00:22:27.119 ************************************ 00:22:27.119 18:59:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:27.119 18:59:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:27.119 18:59:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:27.119 18:59:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:27.119 ************************************ 00:22:27.119 START TEST nvmf_fio_host 00:22:27.119 ************************************ 00:22:27.119 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:27.119 * Looking for test storage... 00:22:27.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:27.119 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:27.119 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:22:27.119 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:27.119 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:27.119 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:27.119 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:27.119 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:27.119 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:27.119 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:27.119 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:27.119 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:27.119 18:59:49 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:27.119 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:27.119 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:27.119 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:27.119 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:22:27.119 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:22:27.119 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:27.119 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:27.120 18:59:49 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:27.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:27.120 --rc genhtml_branch_coverage=1 00:22:27.120 --rc genhtml_function_coverage=1 00:22:27.120 --rc genhtml_legend=1 00:22:27.120 --rc geninfo_all_blocks=1 00:22:27.120 --rc geninfo_unexecuted_blocks=1 00:22:27.120 00:22:27.120 ' 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:27.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:27.120 --rc genhtml_branch_coverage=1 00:22:27.120 --rc genhtml_function_coverage=1 00:22:27.120 --rc genhtml_legend=1 00:22:27.120 --rc geninfo_all_blocks=1 00:22:27.120 --rc geninfo_unexecuted_blocks=1 00:22:27.120 00:22:27.120 ' 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:27.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:27.120 --rc genhtml_branch_coverage=1 00:22:27.120 --rc genhtml_function_coverage=1 00:22:27.120 --rc genhtml_legend=1 00:22:27.120 --rc geninfo_all_blocks=1 00:22:27.120 --rc geninfo_unexecuted_blocks=1 00:22:27.120 00:22:27.120 ' 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:27.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:27.120 --rc genhtml_branch_coverage=1 00:22:27.120 --rc genhtml_function_coverage=1 00:22:27.120 --rc genhtml_legend=1 00:22:27.120 --rc geninfo_all_blocks=1 00:22:27.120 --rc geninfo_unexecuted_blocks=1 00:22:27.120 00:22:27.120 ' 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:27.120 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:27.120 18:59:49 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:22:27.120 18:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.691 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:33.691 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:22:33.691 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:33.691 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:33.691 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:33.691 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:33.691 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:33.691 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:22:33.691 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:33.691 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:22:33.691 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:22:33.691 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:22:33.691 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:22:33.691 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:22:33.691 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:22:33.691 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:33.691 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:33.691 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:33.691 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.0 (0x8086 - 0x159b)' 00:22:33.692 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:33.692 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.692 18:59:54 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:33.692 Found net devices under 0000:86:00.0: cvl_0_0 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:33.692 Found net devices under 0000:86:00.1: cvl_0_1 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:33.692 18:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:33.692 18:59:55 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:33.692 18:59:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:33.692 18:59:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:33.692 18:59:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:33.692 18:59:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:33.692 18:59:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:33.692 18:59:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:33.692 18:59:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:33.692 18:59:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:33.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:33.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:22:33.692 00:22:33.692 --- 10.0.0.2 ping statistics --- 00:22:33.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.692 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:22:33.692 18:59:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:33.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:33.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:22:33.692 00:22:33.692 --- 10.0.0.1 ping statistics --- 00:22:33.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.692 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:22:33.692 18:59:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:33.692 18:59:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:22:33.692 18:59:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:33.692 18:59:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:33.692 18:59:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:33.692 18:59:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:33.692 18:59:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:33.692 18:59:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:33.692 18:59:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:33.692 18:59:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:33.692 18:59:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:33.692 18:59:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:33.692 18:59:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.692 18:59:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3734782 00:22:33.692 18:59:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:33.692 18:59:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:33.692 18:59:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3734782 00:22:33.692 18:59:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3734782 ']' 00:22:33.692 18:59:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.692 18:59:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:33.692 18:59:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.692 18:59:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:33.693 18:59:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.693 [2024-11-20 18:59:55.343445] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:22:33.693 [2024-11-20 18:59:55.343489] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:33.693 [2024-11-20 18:59:55.424216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:33.693 [2024-11-20 18:59:55.463989] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:33.693 [2024-11-20 18:59:55.464029] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:33.693 [2024-11-20 18:59:55.464040] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:33.693 [2024-11-20 18:59:55.464045] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:33.693 [2024-11-20 18:59:55.464050] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:33.693 [2024-11-20 18:59:55.465496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.693 [2024-11-20 18:59:55.465602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:33.693 [2024-11-20 18:59:55.465710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.693 [2024-11-20 18:59:55.465711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:33.951 18:59:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:33.951 18:59:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:22:33.951 18:59:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:34.209 [2024-11-20 18:59:56.326030] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:34.209 18:59:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:34.209 18:59:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:34.209 18:59:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.209 18:59:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:34.466 Malloc1 00:22:34.466 18:59:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:34.725 18:59:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:34.725 18:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:34.983 [2024-11-20 18:59:57.183190] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:34.983 18:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:35.241 18:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:35.241 18:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:35.241 18:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:35.241 18:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:35.241 18:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:35.241 18:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:35.241 18:59:57 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:35.241 18:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:35.241 18:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:35.241 18:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:35.241 18:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:35.241 18:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:35.241 18:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:35.241 18:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:35.241 18:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:35.241 18:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:35.241 18:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:35.241 18:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:35.241 18:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:35.241 18:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:35.241 18:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:35.241 18:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:35.241 18:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:35.498 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:35.498 fio-3.35 00:22:35.498 Starting 1 thread 00:22:38.027 00:22:38.027 test: (groupid=0, jobs=1): err= 0: pid=3735383: Wed Nov 20 19:00:00 2024 00:22:38.027 read: IOPS=11.8k, BW=46.3MiB/s (48.5MB/s)(92.8MiB/2005msec) 00:22:38.027 slat (nsec): min=1524, max=438326, avg=1896.42, stdev=3962.19 00:22:38.027 clat (usec): min=3672, max=10323, avg=5953.71, stdev=481.23 00:22:38.027 lat (usec): min=3705, max=10325, avg=5955.61, stdev=481.34 00:22:38.027 clat percentiles (usec): 00:22:38.028 | 1.00th=[ 4752], 5.00th=[ 5211], 10.00th=[ 5407], 20.00th=[ 5604], 00:22:38.028 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5932], 60.00th=[ 6063], 00:22:38.028 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6521], 95.00th=[ 6652], 00:22:38.028 | 99.00th=[ 6980], 99.50th=[ 7242], 99.90th=[ 8848], 99.95th=[ 9503], 00:22:38.028 | 99.99th=[10290] 00:22:38.028 bw ( KiB/s): min=46512, max=47848, per=99.95%, avg=47364.00, stdev=614.23, samples=4 00:22:38.028 iops : min=11628, max=11962, avg=11841.00, stdev=153.56, samples=4 00:22:38.028 write: IOPS=11.8k, BW=46.1MiB/s (48.3MB/s)(92.3MiB/2005msec); 0 zone resets 00:22:38.028 slat (nsec): min=1568, max=324729, avg=1947.39, stdev=2496.29 00:22:38.028 clat (usec): min=2893, max=9498, avg=4836.76, stdev=396.62 00:22:38.028 lat (usec): min=2908, max=9500, avg=4838.70, stdev=396.82 00:22:38.028 clat percentiles (usec): 00:22:38.028 | 1.00th=[ 3884], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4555], 00:22:38.028 | 30.00th=[ 4686], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4948], 
00:22:38.028 | 70.00th=[ 5014], 80.00th=[ 5145], 90.00th=[ 5276], 95.00th=[ 5407], 00:22:38.028 | 99.00th=[ 5735], 99.50th=[ 6128], 99.90th=[ 7701], 99.95th=[ 8848], 00:22:38.028 | 99.99th=[ 9110] 00:22:38.028 bw ( KiB/s): min=46720, max=47680, per=100.00%, avg=47166.00, stdev=394.98, samples=4 00:22:38.028 iops : min=11680, max=11920, avg=11791.50, stdev=98.74, samples=4 00:22:38.028 lat (msec) : 4=0.76%, 10=99.23%, 20=0.01% 00:22:38.028 cpu : usr=68.36%, sys=27.30%, ctx=301, majf=0, minf=3 00:22:38.028 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:38.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.028 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:38.028 issued rwts: total=23754,23638,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.028 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:38.028 00:22:38.028 Run status group 0 (all jobs): 00:22:38.028 READ: bw=46.3MiB/s (48.5MB/s), 46.3MiB/s-46.3MiB/s (48.5MB/s-48.5MB/s), io=92.8MiB (97.3MB), run=2005-2005msec 00:22:38.028 WRITE: bw=46.1MiB/s (48.3MB/s), 46.1MiB/s-46.1MiB/s (48.3MB/s-48.3MB/s), io=92.3MiB (96.8MB), run=2005-2005msec 00:22:38.028 19:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:38.028 19:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:38.028 19:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:38.028 19:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:22:38.028 19:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:38.028 19:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:38.028 19:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:38.028 19:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:38.028 19:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:38.028 19:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:38.028 19:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:38.028 19:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:38.028 19:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:38.028 19:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:38.028 19:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:38.028 19:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:38.028 19:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:38.028 19:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:38.028 19:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:38.028 19:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:22:38.028 19:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:38.028 19:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:38.028 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:38.028 fio-3.35 00:22:38.028 Starting 1 thread 00:22:40.560 00:22:40.560 test: (groupid=0, jobs=1): err= 0: pid=3735988: Wed Nov 20 19:00:02 2024 00:22:40.560 read: IOPS=10.8k, BW=168MiB/s (176MB/s)(337MiB/2003msec) 00:22:40.560 slat (nsec): min=2525, max=97521, avg=2848.63, stdev=1353.43 00:22:40.560 clat (usec): min=1908, max=49435, avg=6985.78, stdev=3375.46 00:22:40.560 lat (usec): min=1911, max=49438, avg=6988.63, stdev=3375.51 00:22:40.560 clat percentiles (usec): 00:22:40.560 | 1.00th=[ 3621], 5.00th=[ 4293], 10.00th=[ 4752], 20.00th=[ 5342], 00:22:40.560 | 30.00th=[ 5800], 40.00th=[ 6259], 50.00th=[ 6718], 60.00th=[ 7177], 00:22:40.560 | 70.00th=[ 7635], 80.00th=[ 8160], 90.00th=[ 8717], 95.00th=[ 9634], 00:22:40.560 | 99.00th=[11338], 99.50th=[43254], 99.90th=[47973], 99.95th=[49021], 00:22:40.560 | 99.99th=[49546] 00:22:40.560 bw ( KiB/s): min=76864, max=93184, per=50.18%, avg=86432.00, stdev=7097.80, samples=4 00:22:40.560 iops : min= 4804, max= 5824, avg=5402.00, stdev=443.61, samples=4 00:22:40.560 write: IOPS=6421, BW=100MiB/s (105MB/s)(177MiB/1767msec); 0 zone resets 00:22:40.560 slat (usec): min=29, max=380, avg=31.76, stdev= 7.53 00:22:40.560 clat (usec): min=3121, max=15984, avg=8553.00, stdev=1530.53 00:22:40.560 lat (usec): min=3152, max=16014, avg=8584.76, stdev=1531.93 00:22:40.560 clat percentiles (usec): 00:22:40.560 | 1.00th=[ 5473], 5.00th=[ 6325], 10.00th=[ 6849], 
20.00th=[ 7308], 00:22:40.560 | 30.00th=[ 7701], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 8717], 00:22:40.560 | 70.00th=[ 9110], 80.00th=[ 9765], 90.00th=[10683], 95.00th=[11207], 00:22:40.560 | 99.00th=[13042], 99.50th=[13960], 99.90th=[15270], 99.95th=[15664], 00:22:40.560 | 99.99th=[15926] 00:22:40.560 bw ( KiB/s): min=80256, max=97280, per=87.80%, avg=90216.00, stdev=7189.75, samples=4 00:22:40.560 iops : min= 5016, max= 6080, avg=5638.50, stdev=449.36, samples=4 00:22:40.560 lat (msec) : 2=0.01%, 4=1.70%, 10=90.15%, 20=7.76%, 50=0.39% 00:22:40.560 cpu : usr=84.77%, sys=14.34%, ctx=58, majf=0, minf=3 00:22:40.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:40.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:40.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:40.560 issued rwts: total=21561,11347,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:40.560 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:40.560 00:22:40.560 Run status group 0 (all jobs): 00:22:40.560 READ: bw=168MiB/s (176MB/s), 168MiB/s-168MiB/s (176MB/s-176MB/s), io=337MiB (353MB), run=2003-2003msec 00:22:40.560 WRITE: bw=100MiB/s (105MB/s), 100MiB/s-100MiB/s (105MB/s-105MB/s), io=177MiB (186MB), run=1767-1767msec 00:22:40.560 19:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:40.560 19:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:40.560 19:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:40.560 19:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:40.560 19:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:40.560 19:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:22:40.560 19:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:22:40.560 19:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:40.560 19:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:22:40.560 19:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:40.560 19:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:40.819 rmmod nvme_tcp 00:22:40.819 rmmod nvme_fabrics 00:22:40.819 rmmod nvme_keyring 00:22:40.819 19:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:40.819 19:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:22:40.819 19:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:22:40.819 19:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3734782 ']' 00:22:40.819 19:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3734782 00:22:40.819 19:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3734782 ']' 00:22:40.819 19:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 3734782 00:22:40.819 19:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:22:40.819 19:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:40.819 19:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3734782 00:22:40.819 19:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:40.819 19:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:40.819 19:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3734782' 00:22:40.819 killing process with pid 3734782 00:22:40.819 19:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3734782 00:22:40.819 19:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3734782 00:22:41.078 19:00:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:41.078 19:00:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:41.078 19:00:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:41.078 19:00:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:22:41.078 19:00:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:22:41.078 19:00:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:41.078 19:00:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:22:41.078 19:00:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:41.078 19:00:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:41.078 19:00:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.078 19:00:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:41.078 19:00:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.986 19:00:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:42.986 00:22:42.986 real 0m16.131s 00:22:42.986 user 0m47.202s 00:22:42.986 sys 0m6.496s 00:22:42.986 19:00:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:42.986 19:00:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.986 
************************************ 00:22:42.987 END TEST nvmf_fio_host 00:22:42.987 ************************************ 00:22:42.987 19:00:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:42.987 19:00:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:42.987 19:00:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:42.987 19:00:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.246 ************************************ 00:22:43.247 START TEST nvmf_failover 00:22:43.247 ************************************ 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:43.247 * Looking for test storage... 00:22:43.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 
00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:22:43.247 19:00:05 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:43.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.247 --rc genhtml_branch_coverage=1 00:22:43.247 --rc genhtml_function_coverage=1 00:22:43.247 --rc genhtml_legend=1 00:22:43.247 --rc geninfo_all_blocks=1 00:22:43.247 --rc geninfo_unexecuted_blocks=1 00:22:43.247 00:22:43.247 ' 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:43.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.247 --rc genhtml_branch_coverage=1 00:22:43.247 --rc genhtml_function_coverage=1 00:22:43.247 --rc genhtml_legend=1 00:22:43.247 --rc geninfo_all_blocks=1 00:22:43.247 --rc geninfo_unexecuted_blocks=1 00:22:43.247 00:22:43.247 ' 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:43.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.247 --rc genhtml_branch_coverage=1 00:22:43.247 --rc genhtml_function_coverage=1 00:22:43.247 --rc genhtml_legend=1 00:22:43.247 --rc geninfo_all_blocks=1 00:22:43.247 --rc geninfo_unexecuted_blocks=1 00:22:43.247 00:22:43.247 ' 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:43.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.247 --rc genhtml_branch_coverage=1 00:22:43.247 --rc genhtml_function_coverage=1 00:22:43.247 --rc 
genhtml_legend=1 00:22:43.247 --rc geninfo_all_blocks=1 00:22:43.247 --rc geninfo_unexecuted_blocks=1 00:22:43.247 00:22:43.247 ' 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:43.247 19:00:05 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:43.247 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:43.247 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:43.248 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:43.248 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:43.248 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:43.248 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:43.248 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:43.248 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:43.248 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:43.248 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:43.248 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:43.248 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.248 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:43.248 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.248 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:43.248 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:43.248 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:22:43.248 19:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:49.943 19:00:11 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:49.943 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:49.943 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.943 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:49.944 Found net devices under 0000:86:00.0: cvl_0_0 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:49.944 Found net devices under 0000:86:00.1: cvl_0_1 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:49.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:49.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:22:49.944 00:22:49.944 --- 10.0.0.2 ping statistics --- 00:22:49.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.944 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:49.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:49.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:22:49.944 00:22:49.944 --- 10.0.0.1 ping statistics --- 00:22:49.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.944 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3740384 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 3740384 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3740384 ']' 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:49.944 19:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:49.944 [2024-11-20 19:00:11.526120] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:22:49.944 [2024-11-20 19:00:11.526165] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.944 [2024-11-20 19:00:11.604197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:49.944 [2024-11-20 19:00:11.643441] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.944 [2024-11-20 19:00:11.643481] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.944 [2024-11-20 19:00:11.643488] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.944 [2024-11-20 19:00:11.643494] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:49.944 [2024-11-20 19:00:11.643501] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:49.944 [2024-11-20 19:00:11.644935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.944 [2024-11-20 19:00:11.645040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.944 [2024-11-20 19:00:11.645041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:50.203 19:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:50.203 19:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:50.203 19:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:50.203 19:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:50.203 19:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:50.203 19:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:50.203 19:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:50.461 [2024-11-20 19:00:12.559942] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.461 19:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:50.461 Malloc0 00:22:50.719 19:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:50.719 19:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:50.978 19:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:51.236 [2024-11-20 19:00:13.363093] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.236 19:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:51.236 [2024-11-20 19:00:13.555569] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:51.494 19:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:51.494 [2024-11-20 19:00:13.748215] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:51.494 19:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:51.494 19:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3740709 00:22:51.494 19:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:51.494 19:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3740709 /var/tmp/bdevperf.sock 00:22:51.494 19:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 3740709 ']' 00:22:51.494 19:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:51.494 19:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:51.494 19:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:51.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:51.494 19:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:51.494 19:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:51.752 19:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:51.752 19:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:51.752 19:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:52.010 NVMe0n1 00:22:52.269 19:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:52.527 00:22:52.527 19:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:52.527 19:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3740940 00:22:52.527 19:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:22:53.461 19:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:53.720 [2024-11-20 19:00:15.928294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88a2d0 is same with the state(6) to be set
00:22:53.720 19:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:22:57.003 19:00:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:22:57.261 00:22:57.261 19:00:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:22:57.261 [2024-11-20 19:00:19.541376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88afa0 is same with the state(6) to be set
00:22:57.262 19:00:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:23:00.543 19:00:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:00.543 [2024-11-20 19:00:22.763950] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:00.543 19:00:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:23:01.478 19:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:23:01.736 [2024-11-20 19:00:23.981797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88bce0 is same with the state(6) to be set
00:23:01.736 19:00:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3740940
00:23:08.316 { 00:23:08.316 "results": [ 00:23:08.316 { 00:23:08.316 "job": "NVMe0n1", 00:23:08.316 "core_mask": "0x1", 00:23:08.316 "workload": "verify", 00:23:08.316 "status": "finished", 00:23:08.316 "verify_range": { 00:23:08.316 "start": 0, 00:23:08.316 "length": 16384 00:23:08.316 }, 00:23:08.316 "queue_depth": 128, 00:23:08.316 "io_size": 4096, 00:23:08.316 "runtime": 15.011151, 00:23:08.316 "iops": 11228.053065351218, 00:23:08.316 "mibps": 43.859582286528195, 00:23:08.316 "io_failed": 9525, 00:23:08.316 "io_timeout": 0, 00:23:08.316 "avg_latency_us": 10768.001219417296, 00:23:08.316 "min_latency_us": 405.69904761904763, 00:23:08.317 "max_latency_us": 22219.82476190476 00:23:08.317 } 00:23:08.317 ], 00:23:08.317 "core_count": 1 00:23:08.317 }
00:23:08.317 19:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3740709 00:23:08.317 19:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3740709 ']' 00:23:08.317 19:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3740709 00:23:08.317 19:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:08.317 19:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:08.317 19:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3740709 00:23:08.317 19:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:08.317 19:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:08.317 19:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid
3740709' 00:23:08.317 killing process with pid 3740709 00:23:08.317 19:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3740709 00:23:08.317 19:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3740709 00:23:08.317 19:00:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:08.317 [2024-11-20 19:00:13.820883] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:23:08.317 [2024-11-20 19:00:13.820937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3740709 ] 00:23:08.317 [2024-11-20 19:00:13.898096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.317 [2024-11-20 19:00:13.939240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.317 Running I/O for 15 seconds... 
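The bdevperf summary printed above is internally consistent: the "mibps" field is just "iops" scaled by the 4096-byte "io_size". A quick sanity check of the logged figures (values copied from the JSON summary in this log):

```python
# Cross-check the bdevperf result fields logged above:
# throughput in MiB/s should equal IOPS * io_size / 2^20.
iops = 11228.053065351218   # "iops" from the JSON summary
io_size = 4096              # "io_size" in bytes per I/O

mibps = iops * io_size / (1 << 20)
print(round(mibps, 6))  # 43.859582 -- matches the logged "mibps"
```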
00:23:08.317 11183.00 IOPS, 43.68 MiB/s [2024-11-20T18:00:30.642Z] [2024-11-20 19:00:15.930149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.317 [2024-11-20 19:00:15.930184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 19:00:15.930702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.318 [2024-11-20 19:00:15.930709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 19:00:15.930717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.318 [2024-11-20 19:00:15.930724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 19:00:15.930733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.318 [2024-11-20 19:00:15.930739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 19:00:15.930747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.318 [2024-11-20 19:00:15.930754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 19:00:15.930761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.318 [2024-11-20 19:00:15.930768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 19:00:15.930776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.318 
[2024-11-20 19:00:15.930783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 19:00:15.930791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.318 [2024-11-20 19:00:15.930797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 19:00:15.930805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.318 [2024-11-20 19:00:15.930812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 19:00:15.930820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.318 [2024-11-20 19:00:15.930827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 19:00:15.930835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.318 [2024-11-20 19:00:15.930841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 19:00:15.930849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.318 [2024-11-20 19:00:15.930855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 19:00:15.930863] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.318 [2024-11-20 19:00:15.930870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 19:00:15.930878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.318 [2024-11-20 19:00:15.930884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 19:00:15.930892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:99464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.318 [2024-11-20 19:00:15.930898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 19:00:15.930906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.318 [2024-11-20 19:00:15.930917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 19:00:15.930926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.318 [2024-11-20 19:00:15.930933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 19:00:15.930940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.318 [2024-11-20 19:00:15.930947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 19:00:15.930955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.318 [2024-11-20 19:00:15.930962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 19:00:15.930970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.318 [2024-11-20 19:00:15.930976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 19:00:15.930986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.318 [2024-11-20 19:00:15.930992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 19:00:15.931001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.318 [2024-11-20 19:00:15.931007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 19:00:15.931015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.318 [2024-11-20 19:00:15.931021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 19:00:15.931029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.318 [2024-11-20 19:00:15.931036] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 19:00:15.931044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.318 [2024-11-20 19:00:15.931051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 19:00:15.931058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:99552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.318 [2024-11-20 19:00:15.931065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 19:00:15.931072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:99560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.318 [2024-11-20 19:00:15.931079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 19:00:15.931088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.318 [2024-11-20 19:00:15.931094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 19:00:15.931103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.318 [2024-11-20 19:00:15.931111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 19:00:15.931118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 
lba:99584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.318 [2024-11-20 19:00:15.931125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 19:00:15.931133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.318 [2024-11-20 19:00:15.931140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 19:00:15.931148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.318 [2024-11-20 19:00:15.931155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 19:00:15.931163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.318 [2024-11-20 19:00:15.931169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 19:00:15.931176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:99616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.318 [2024-11-20 19:00:15.931183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 19:00:15.931191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.318 [2024-11-20 19:00:15.931198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 
19:00:15.931210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.318 [2024-11-20 19:00:15.931217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 19:00:15.931225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.318 [2024-11-20 19:00:15.931231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.318 [2024-11-20 19:00:15.931239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:99664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931289] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:99696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:99712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:99728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:99760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931459] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:99848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:99856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 
[2024-11-20 19:00:15.931629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:99864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:99872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931713] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:99936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:99952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.319 [2024-11-20 19:00:15.931819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.319 [2024-11-20 19:00:15.931826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:99968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.320 [2024-11-20 19:00:15.931833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.320 [2024-11-20 19:00:15.931841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:99976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.320 [2024-11-20 19:00:15.931848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.320 [2024-11-20 19:00:15.931869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.320 [2024-11-20 19:00:15.931880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99984 len:8 PRP1 0x0 PRP2 0x0 00:23:08.320 [2024-11-20 19:00:15.931887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.320 [2024-11-20 19:00:15.931896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:23:08.320 [2024-11-20 19:00:15.931902] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.320 [2024-11-20 19:00:15.931907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99992 len:8 PRP1 0x0 PRP2 0x0 00:23:08.320 [2024-11-20 19:00:15.931913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.320 [2024-11-20 19:00:15.931920] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.320 [2024-11-20 19:00:15.931925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.320 [2024-11-20 19:00:15.931930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100000 len:8 PRP1 0x0 PRP2 0x0 00:23:08.320 [2024-11-20 19:00:15.931937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.320 [2024-11-20 19:00:15.931944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.320 [2024-11-20 19:00:15.931949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.320 [2024-11-20 19:00:15.931955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100008 len:8 PRP1 0x0 PRP2 0x0 00:23:08.320 [2024-11-20 19:00:15.931961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.320 [2024-11-20 19:00:15.931967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.320 [2024-11-20 19:00:15.931972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.320 [2024-11-20 19:00:15.931977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:100016 len:8 PRP1 0x0 PRP2 0x0 00:23:08.320 [2024-11-20 19:00:15.931983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.320 [2024-11-20 19:00:15.931991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.320 [2024-11-20 19:00:15.931995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.320 [2024-11-20 19:00:15.932001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100024 len:8 PRP1 0x0 PRP2 0x0 00:23:08.320 [2024-11-20 19:00:15.932007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.320 [2024-11-20 19:00:15.932013] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.320 [2024-11-20 19:00:15.932018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.320 [2024-11-20 19:00:15.932023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100032 len:8 PRP1 0x0 PRP2 0x0 00:23:08.320 [2024-11-20 19:00:15.932029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.320 [2024-11-20 19:00:15.932036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.320 [2024-11-20 19:00:15.932041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.320 [2024-11-20 19:00:15.932046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100040 len:8 PRP1 0x0 PRP2 0x0 00:23:08.320 [2024-11-20 19:00:15.932054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.320 [2024-11-20 
19:00:15.932061] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.320 [2024-11-20 19:00:15.932066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.320 [2024-11-20 19:00:15.932074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100048 len:8 PRP1 0x0 PRP2 0x0 00:23:08.320 [2024-11-20 19:00:15.932081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.320 [2024-11-20 19:00:15.932087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.320 [2024-11-20 19:00:15.932093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.320 [2024-11-20 19:00:15.932098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100056 len:8 PRP1 0x0 PRP2 0x0 00:23:08.320 [2024-11-20 19:00:15.932105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.320 [2024-11-20 19:00:15.932111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.320 [2024-11-20 19:00:15.932116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.320 [2024-11-20 19:00:15.932121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100064 len:8 PRP1 0x0 PRP2 0x0 00:23:08.320 [2024-11-20 19:00:15.932127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.320 [2024-11-20 19:00:15.932133] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.320 [2024-11-20 19:00:15.932138] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.320 
[2024-11-20 19:00:15.932144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100072 len:8 PRP1 0x0 PRP2 0x0 00:23:08.320 [2024-11-20 19:00:15.932151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.320 [2024-11-20 19:00:15.932157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.320 [2024-11-20 19:00:15.932162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.320 [2024-11-20 19:00:15.932167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100080 len:8 PRP1 0x0 PRP2 0x0 00:23:08.320 [2024-11-20 19:00:15.932174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.320 [2024-11-20 19:00:15.932180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.320 [2024-11-20 19:00:15.932185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.320 [2024-11-20 19:00:15.932190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100088 len:8 PRP1 0x0 PRP2 0x0 00:23:08.320 [2024-11-20 19:00:15.932196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.320 [2024-11-20 19:00:15.942577] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.320 [2024-11-20 19:00:15.942588] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.320 [2024-11-20 19:00:15.942595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100096 len:8 PRP1 0x0 PRP2 0x0 00:23:08.320 [2024-11-20 19:00:15.942604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.320 [2024-11-20 19:00:15.942611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.320 [2024-11-20 19:00:15.942615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.320 [2024-11-20 19:00:15.942623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100104 len:8 PRP1 0x0 PRP2 0x0 00:23:08.320 [2024-11-20 19:00:15.942629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.320 [2024-11-20 19:00:15.942675] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:08.320 [2024-11-20 19:00:15.942698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.320 [2024-11-20 19:00:15.942708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.320 [2024-11-20 19:00:15.942716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.320 [2024-11-20 19:00:15.942723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.320 [2024-11-20 19:00:15.942730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.320 [2024-11-20 19:00:15.942737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.320 [2024-11-20 19:00:15.942744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST 
(0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.320 [2024-11-20 19:00:15.942752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.320 [2024-11-20 19:00:15.942759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:08.320 [2024-11-20 19:00:15.942797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d5340 (9): Bad file descriptor 00:23:08.320 [2024-11-20 19:00:15.945542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:08.320 [2024-11-20 19:00:15.976641] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:23:08.320 11046.50 IOPS, 43.15 MiB/s [2024-11-20T18:00:30.645Z] 11191.00 IOPS, 43.71 MiB/s [2024-11-20T18:00:30.645Z] 11256.50 IOPS, 43.97 MiB/s [2024-11-20T18:00:30.645Z] [2024-11-20 19:00:19.541904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:43208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.320 [2024-11-20 19:00:19.541939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.320 [2024-11-20 19:00:19.541953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:43216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.320 [2024-11-20 19:00:19.541961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.320 [2024-11-20 19:00:19.541970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:43224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.320 [2024-11-20 19:00:19.541977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.320 [2024-11-20 19:00:19.541986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:43232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.320 [2024-11-20 19:00:19.541994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.321 [2024-11-20 19:00:19.542003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:43240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.321 [2024-11-20 19:00:19.542011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.321 [2024-11-20 19:00:19.542020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:43248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.321 [2024-11-20 19:00:19.542032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.321 [2024-11-20 19:00:19.542041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:43256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.321 [2024-11-20 19:00:19.542048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.321 [2024-11-20 19:00:19.542057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:43264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.321 [2024-11-20 19:00:19.542064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.321 [2024-11-20 19:00:19.542074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:43272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.321 
[2024-11-20 19:00:19.542082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.321 [2024-11-20 19:00:19.542090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:43280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.321 [2024-11-20 19:00:19.542097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.321 [2024-11-20 19:00:19.542104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:43288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.321 [2024-11-20 19:00:19.542111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.321 [2024-11-20 19:00:19.542119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:43296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.321 [2024-11-20 19:00:19.542126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.321 [2024-11-20 19:00:19.542134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:43304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.321 [2024-11-20 19:00:19.542140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.321 [2024-11-20 19:00:19.542148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:43312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.321 [2024-11-20 19:00:19.542155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.321 [2024-11-20 19:00:19.542162] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:43320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.321 [2024-11-20 19:00:19.542169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.321 [2024-11-20 19:00:19.542176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:43328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.321 [2024-11-20 19:00:19.542183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.321 [2024-11-20 19:00:19.542190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:43336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.321 [2024-11-20 19:00:19.542197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.321 [2024-11-20 19:00:19.542210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:43344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.321 [2024-11-20 19:00:19.542217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.321 [2024-11-20 19:00:19.542227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:43352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.321 [2024-11-20 19:00:19.542233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.321 [2024-11-20 19:00:19.542241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:43360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.321 [2024-11-20 19:00:19.542248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.321 [2024-11-20 19:00:19.542256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:43368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.321 [2024-11-20 19:00:19.542263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.321 [2024-11-20 19:00:19.542271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:43376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.321 [2024-11-20 19:00:19.542278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.321 [2024-11-20 19:00:19.542286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:43384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.321 [2024-11-20 19:00:19.542292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.321 [2024-11-20 19:00:19.542301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:43392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.321 [2024-11-20 19:00:19.542307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.321 [2024-11-20 19:00:19.542315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:43400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.321 [2024-11-20 19:00:19.542322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.321 [2024-11-20 19:00:19.542330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:43408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:08.321 [2024-11-20 19:00:19.542336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.321 [2024-11-20 19:00:19.542344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:43416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.321 [2024-11-20 19:00:19.542351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.321 [2024-11-20 19:00:19.542359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:43424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.321 [2024-11-20 19:00:19.542366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.321 [2024-11-20 19:00:19.542374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:43432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.321 [2024-11-20 19:00:19.542380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.321 [2024-11-20 19:00:19.542388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:43440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.321 [2024-11-20 19:00:19.542394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.321 [2024-11-20 19:00:19.542403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:43448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.321 [2024-11-20 19:00:19.542412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.321 [2024-11-20 19:00:19.542420] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:43456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.321 [2024-11-20 19:00:19.542427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.321 [2024-11-20 19:00:19.542434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:43464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.321 [2024-11-20 19:00:19.542442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.321 [2024-11-20 19:00:19.542449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:43472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.321 [2024-11-20 19:00:19.542456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.321 [2024-11-20 19:00:19.542464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:43480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.321 [2024-11-20 19:00:19.542471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.321 [2024-11-20 19:00:19.542479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:43488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.321 [2024-11-20 19:00:19.542485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.321 [2024-11-20 19:00:19.542493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:43496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.322 [2024-11-20 19:00:19.542499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.542507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:43504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.322 [2024-11-20 19:00:19.542514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.542523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:43512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.322 [2024-11-20 19:00:19.542529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.542537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:43520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.322 [2024-11-20 19:00:19.542544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.542551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:43528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.322 [2024-11-20 19:00:19.542559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.542567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:43536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.322 [2024-11-20 19:00:19.542573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.542581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:43544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:08.322 [2024-11-20 19:00:19.542587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.542597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:43552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.322 [2024-11-20 19:00:19.542603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.542611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:43560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.322 [2024-11-20 19:00:19.542617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.542625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:43568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.322 [2024-11-20 19:00:19.542632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.542640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:43576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.322 [2024-11-20 19:00:19.542646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.542654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:43584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.322 [2024-11-20 19:00:19.542660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.542669] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:43592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.322 [2024-11-20 19:00:19.542677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.542685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:43600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.322 [2024-11-20 19:00:19.542691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.542699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:43608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.322 [2024-11-20 19:00:19.542705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.542713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.322 [2024-11-20 19:00:19.542720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.542728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:43624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.322 [2024-11-20 19:00:19.542735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.542743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:43632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.322 [2024-11-20 19:00:19.542749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.542757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:43640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.322 [2024-11-20 19:00:19.542763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.542771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:43648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.322 [2024-11-20 19:00:19.542779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.542788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:43656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.322 [2024-11-20 19:00:19.542794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.542802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:43664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.322 [2024-11-20 19:00:19.542808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.542816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:43672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.322 [2024-11-20 19:00:19.542823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.542831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:43680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:08.322 [2024-11-20 19:00:19.542838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.542846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:43688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.322 [2024-11-20 19:00:19.542852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.542860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:43696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.322 [2024-11-20 19:00:19.542866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.542874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:43704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.322 [2024-11-20 19:00:19.542881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.542889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:43712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.322 [2024-11-20 19:00:19.542896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.542904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:43728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.322 [2024-11-20 19:00:19.542912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.542920] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:43736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.322 [2024-11-20 19:00:19.542927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.542936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:43744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.322 [2024-11-20 19:00:19.542942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.542950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.322 [2024-11-20 19:00:19.542958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.542966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:43760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.322 [2024-11-20 19:00:19.542976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.542984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.322 [2024-11-20 19:00:19.542991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.542999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:43776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.322 [2024-11-20 19:00:19.543006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.543014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:43784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.322 [2024-11-20 19:00:19.543020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.543028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:43792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.322 [2024-11-20 19:00:19.543034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.543043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:43800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.322 [2024-11-20 19:00:19.543049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.543057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.322 [2024-11-20 19:00:19.543064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.322 [2024-11-20 19:00:19.543072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:43816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.322 [2024-11-20 19:00:19.543078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:43824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.323 
[2024-11-20 19:00:19.543094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:43832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.323 [2024-11-20 19:00:19.543108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:43840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.323 [2024-11-20 19:00:19.543122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:43848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.323 [2024-11-20 19:00:19.543136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:43856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.323 [2024-11-20 19:00:19.543151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:43864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.323 [2024-11-20 19:00:19.543167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543175] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:43872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.323 [2024-11-20 19:00:19.543181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:43880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.323 [2024-11-20 19:00:19.543196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:43888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.323 [2024-11-20 19:00:19.543215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:43896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.323 [2024-11-20 19:00:19.543229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:43904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.323 [2024-11-20 19:00:19.543244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:43912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.323 [2024-11-20 19:00:19.543259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.323 [2024-11-20 19:00:19.543274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:43928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.323 [2024-11-20 19:00:19.543288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:43936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.323 [2024-11-20 19:00:19.543302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.323 [2024-11-20 19:00:19.543317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:43952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.323 [2024-11-20 19:00:19.543331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:43960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.323 [2024-11-20 19:00:19.543346] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:43968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.323 [2024-11-20 19:00:19.543362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:43976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.323 [2024-11-20 19:00:19.543376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:43984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.323 [2024-11-20 19:00:19.543390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:43992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.323 [2024-11-20 19:00:19.543406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.323 [2024-11-20 19:00:19.543421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 
lba:44008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.323 [2024-11-20 19:00:19.543435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:44016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.323 [2024-11-20 19:00:19.543449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:44024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.323 [2024-11-20 19:00:19.543465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:44032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.323 [2024-11-20 19:00:19.543479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:44040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.323 [2024-11-20 19:00:19.543493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:44048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.323 [2024-11-20 19:00:19.543507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 
19:00:19.543515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:44056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.323 [2024-11-20 19:00:19.543521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.323 [2024-11-20 19:00:19.543537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:44072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.323 [2024-11-20 19:00:19.543551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:44080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.323 [2024-11-20 19:00:19.543565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.323 [2024-11-20 19:00:19.543580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:44096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.323 [2024-11-20 19:00:19.543595] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:44104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.323 [2024-11-20 19:00:19.543608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543642] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.323 [2024-11-20 19:00:19.543651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44112 len:8 PRP1 0x0 PRP2 0x0 00:23:08.323 [2024-11-20 19:00:19.543658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543667] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.323 [2024-11-20 19:00:19.543673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.323 [2024-11-20 19:00:19.543682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44120 len:8 PRP1 0x0 PRP2 0x0 00:23:08.323 [2024-11-20 19:00:19.543688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.323 [2024-11-20 19:00:19.543695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.323 [2024-11-20 19:00:19.543700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.324 [2024-11-20 19:00:19.543706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44128 len:8 PRP1 0x0 PRP2 0x0 00:23:08.324 [2024-11-20 19:00:19.543712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.324 [2024-11-20 19:00:19.543720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.324 [2024-11-20 19:00:19.543725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.324 [2024-11-20 19:00:19.543731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44136 len:8 PRP1 0x0 PRP2 0x0 00:23:08.324 [2024-11-20 19:00:19.543738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.324 [2024-11-20 19:00:19.543745] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.324 [2024-11-20 19:00:19.543750] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.324 [2024-11-20 19:00:19.543758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44144 len:8 PRP1 0x0 PRP2 0x0 00:23:08.324 [2024-11-20 19:00:19.543764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.324 [2024-11-20 19:00:19.543771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.324 [2024-11-20 19:00:19.543776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.324 [2024-11-20 19:00:19.543782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44152 len:8 PRP1 0x0 PRP2 0x0 00:23:08.324 [2024-11-20 19:00:19.543788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.324 [2024-11-20 19:00:19.543795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.324 [2024-11-20 19:00:19.543800] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.324 [2024-11-20 19:00:19.543805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44160 len:8 PRP1 0x0 PRP2 0x0 00:23:08.324 [2024-11-20 19:00:19.543811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.324 [2024-11-20 19:00:19.543818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.324 [2024-11-20 19:00:19.543824] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.324 [2024-11-20 19:00:19.543830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44168 len:8 PRP1 0x0 PRP2 0x0 00:23:08.324 [2024-11-20 19:00:19.543836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.324 [2024-11-20 19:00:19.543843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.324 [2024-11-20 19:00:19.543848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.324 [2024-11-20 19:00:19.543858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44176 len:8 PRP1 0x0 PRP2 0x0 00:23:08.324 [2024-11-20 19:00:19.543864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.324 [2024-11-20 19:00:19.543871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.324 [2024-11-20 19:00:19.543877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.324 [2024-11-20 19:00:19.543882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44184 len:8 PRP1 0x0 PRP2 0x0 00:23:08.324 
[2024-11-20 19:00:19.543889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.324 [2024-11-20 19:00:19.543895] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.324 [2024-11-20 19:00:19.543900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.324 [2024-11-20 19:00:19.543905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44192 len:8 PRP1 0x0 PRP2 0x0 00:23:08.324 [2024-11-20 19:00:19.543911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.324 [2024-11-20 19:00:19.543918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.324 [2024-11-20 19:00:19.543922] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.324 [2024-11-20 19:00:19.543928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44200 len:8 PRP1 0x0 PRP2 0x0 00:23:08.324 [2024-11-20 19:00:19.543935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.324 [2024-11-20 19:00:19.543945] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.324 [2024-11-20 19:00:19.543950] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.324 [2024-11-20 19:00:19.543956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44208 len:8 PRP1 0x0 PRP2 0x0 00:23:08.324 [2024-11-20 19:00:19.543962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.324 [2024-11-20 19:00:19.543969] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:23:08.324 [2024-11-20 19:00:19.543974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.324 [2024-11-20 19:00:19.543979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44216 len:8 PRP1 0x0 PRP2 0x0 00:23:08.324 [2024-11-20 19:00:19.543986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.324 [2024-11-20 19:00:19.555807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.324 [2024-11-20 19:00:19.555824] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.324 [2024-11-20 19:00:19.555832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44224 len:8 PRP1 0x0 PRP2 0x0 00:23:08.324 [2024-11-20 19:00:19.555841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.324 [2024-11-20 19:00:19.555850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.324 [2024-11-20 19:00:19.555857] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.324 [2024-11-20 19:00:19.555864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43720 len:8 PRP1 0x0 PRP2 0x0 00:23:08.324 [2024-11-20 19:00:19.555873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.324 [2024-11-20 19:00:19.555921] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:23:08.324 [2024-11-20 19:00:19.555948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:23:08.324 [2024-11-20 19:00:19.555960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.324 [2024-11-20 19:00:19.555971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.324 [2024-11-20 19:00:19.555980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.324 [2024-11-20 19:00:19.555990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.324 [2024-11-20 19:00:19.555998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.324 [2024-11-20 19:00:19.556008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.324 [2024-11-20 19:00:19.556017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.324 [2024-11-20 19:00:19.556026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:08.324 [2024-11-20 19:00:19.556063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d5340 (9): Bad file descriptor 00:23:08.324 [2024-11-20 19:00:19.559813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:08.324 [2024-11-20 19:00:19.702534] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:23:08.324 10918.40 IOPS, 42.65 MiB/s [2024-11-20T18:00:30.649Z] 10994.83 IOPS, 42.95 MiB/s [2024-11-20T18:00:30.649Z] 11080.14 IOPS, 43.28 MiB/s [2024-11-20T18:00:30.649Z] 11150.75 IOPS, 43.56 MiB/s [2024-11-20T18:00:30.649Z] 11176.33 IOPS, 43.66 MiB/s [2024-11-20T18:00:30.649Z] [2024-11-20 19:00:23.982749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:97840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.324 [2024-11-20 19:00:23.982785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.324 [2024-11-20 19:00:23.982800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.324 [2024-11-20 19:00:23.982809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.324 [2024-11-20 19:00:23.982817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.324 [2024-11-20 19:00:23.982825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.324 [2024-11-20 19:00:23.982834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.324 [2024-11-20 19:00:23.982842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.324 [2024-11-20 19:00:23.982852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:97872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.324 [2024-11-20 19:00:23.982860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.324 [2024-11-20 19:00:23.982868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.324 [2024-11-20 19:00:23.982875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.324 [2024-11-20 19:00:23.982883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:97888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.324 [2024-11-20 19:00:23.982889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.324 [2024-11-20 19:00:23.982898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:97896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.324 [2024-11-20 19:00:23.982906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.324 [2024-11-20 19:00:23.982916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.324 [2024-11-20 19:00:23.982923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.324 [2024-11-20 19:00:23.982931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:97912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.325 [2024-11-20 19:00:23.982940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.325 [2024-11-20 19:00:23.982948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:97920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.325 [2024-11-20 
19:00:23.982956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.325 [2024-11-20 19:00:23.982964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:97928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.325 [2024-11-20 19:00:23.982972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.325 [2024-11-20 19:00:23.982985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:97936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.325 [2024-11-20 19:00:23.982993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.325 [2024-11-20 19:00:23.983001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:97944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.325 [2024-11-20 19:00:23.983008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.325 [2024-11-20 19:00:23.983016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.325 [2024-11-20 19:00:23.983023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.325 [2024-11-20 19:00:23.983032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:97960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.325 [2024-11-20 19:00:23.983038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.325 [2024-11-20 19:00:23.983047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:60 nsid:1 lba:97968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.325 [2024-11-20 19:00:23.983053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.325 [2024-11-20 19:00:23.983062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.325 [2024-11-20 19:00:23.983069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.325 [2024-11-20 19:00:23.983077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:97984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.325 [2024-11-20 19:00:23.983083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.325 [2024-11-20 19:00:23.983091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:97992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.325 [2024-11-20 19:00:23.983098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.325 [2024-11-20 19:00:23.983106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:98000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.325 [2024-11-20 19:00:23.983112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.325 [2024-11-20 19:00:23.983120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:98008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.325 [2024-11-20 19:00:23.983127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:08.325 [2024-11-20 19:00:23.983135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.325 [2024-11-20 19:00:23.983142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.325 [2024-11-20 19:00:23.983150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.325 [2024-11-20 19:00:23.983157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.325 [2024-11-20 19:00:23.983165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.325 [2024-11-20 19:00:23.983172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.325 [2024-11-20 19:00:23.983182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:98040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.325 [2024-11-20 19:00:23.983188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.325 [2024-11-20 19:00:23.983196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:98048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.325 [2024-11-20 19:00:23.983209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.325 [2024-11-20 19:00:23.983217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.325 [2024-11-20 19:00:23.983224] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.325 [2024-11-20 19:00:23.983233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.325 [2024-11-20 19:00:23.983239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.325 [2024-11-20 19:00:23.983247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.325 [2024-11-20 19:00:23.983253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.325 [2024-11-20 19:00:23.983261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:98080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.325 [2024-11-20 19:00:23.983268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.325 [2024-11-20 19:00:23.983276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:98088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.325 [2024-11-20 19:00:23.983282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.325 [2024-11-20 19:00:23.983290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:98096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.325 [2024-11-20 19:00:23.983296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.325 [2024-11-20 19:00:23.983304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 
lba:98104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.325 [2024-11-20 19:00:23.983311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.325 [2024-11-20 19:00:23.983319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:98112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.325 [2024-11-20 19:00:23.983327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.325 [2024-11-20 19:00:23.983335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:98120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.325 [2024-11-20 19:00:23.983341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.325 [2024-11-20 19:00:23.983350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.325 [2024-11-20 19:00:23.983356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.325 [2024-11-20 19:00:23.983364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.325 [2024-11-20 19:00:23.983372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.325 [2024-11-20 19:00:23.983380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:98144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.325 [2024-11-20 19:00:23.983386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.325 
[2024-11-20 19:00:23.983394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.325 [2024-11-20 19:00:23.983400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.325 [2024-11-20 19:00:23.983409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.325 [2024-11-20 19:00:23.983415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.325 [2024-11-20 19:00:23.983423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.325 [2024-11-20 19:00:23.983430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.325 [2024-11-20 19:00:23.983438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:98176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.325 [2024-11-20 19:00:23.983445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.325 [2024-11-20 19:00:23.983453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.325 [2024-11-20 19:00:23.983459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.325 [2024-11-20 19:00:23.983467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.325 [2024-11-20 19:00:23.983473] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.325 [2024-11-20 19:00:23.983481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.983489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.983497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.983503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.983511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.983517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.983525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:98224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.983531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.983540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.983546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.983556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 
lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.983563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.983571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.983577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.983585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.983591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.983599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.983606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.983615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.983622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.983629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.983636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 
19:00:23.983643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.983650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.983657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.983665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.983674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.983681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.983688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.983695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.983703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.983709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.983717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.983724] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.983732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:98336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.983741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.983748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.983755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.983763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.983769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.983778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.983785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.983793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.983800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.983807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.983813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.983821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.983827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.983836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.983843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.983851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.983858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.983865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.983872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.983879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.983885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.983894] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.983900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.983908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.983914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.983922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.983930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.983938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.983944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.983952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.983960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.983968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.983974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.983982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.983989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.983996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.984002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.984010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.984018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.984025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.984032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.326 [2024-11-20 19:00:23.984040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.326 [2024-11-20 19:00:23.984048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.327 [2024-11-20 19:00:23.984056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.327 
[2024-11-20 19:00:23.984062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.327 [2024-11-20 19:00:23.984071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.327 [2024-11-20 19:00:23.984078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.327 [2024-11-20 19:00:23.984087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.327 [2024-11-20 19:00:23.984093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.327 [2024-11-20 19:00:23.984101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.327 [2024-11-20 19:00:23.984107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.327 [2024-11-20 19:00:23.984117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.327 [2024-11-20 19:00:23.984124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.327 [2024-11-20 19:00:23.984133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.327 [2024-11-20 19:00:23.984140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.327 [2024-11-20 19:00:23.984148] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.327 [2024-11-20 19:00:23.984154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.327 [2024-11-20 19:00:23.984162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.327 [2024-11-20 19:00:23.984168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.327 [2024-11-20 19:00:23.984176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.327 [2024-11-20 19:00:23.984184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.327 [2024-11-20 19:00:23.984208] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.327 [2024-11-20 19:00:23.984216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98584 len:8 PRP1 0x0 PRP2 0x0 00:23:08.327 [2024-11-20 19:00:23.984223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.327 [2024-11-20 19:00:23.984232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.327 [2024-11-20 19:00:23.984237] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.327 [2024-11-20 19:00:23.984243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98592 len:8 PRP1 0x0 PRP2 0x0 00:23:08.327 [2024-11-20 19:00:23.984251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:08.327 [2024-11-20 19:00:23.984257] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.327 [2024-11-20 19:00:23.984263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.327 [2024-11-20 19:00:23.984269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98600 len:8 PRP1 0x0 PRP2 0x0 00:23:08.327 [2024-11-20 19:00:23.984275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.327 [2024-11-20 19:00:23.984282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.327 [2024-11-20 19:00:23.984287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.327 [2024-11-20 19:00:23.984292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98608 len:8 PRP1 0x0 PRP2 0x0 00:23:08.327 [2024-11-20 19:00:23.984299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.327 [2024-11-20 19:00:23.984306] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.327 [2024-11-20 19:00:23.984311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.327 [2024-11-20 19:00:23.984317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98616 len:8 PRP1 0x0 PRP2 0x0 00:23:08.327 [2024-11-20 19:00:23.984324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.327 [2024-11-20 19:00:23.984331] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.327 [2024-11-20 19:00:23.984336] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:23:08.327 [2024-11-20 19:00:23.984341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98624 len:8 PRP1 0x0 PRP2 0x0 00:23:08.327 [2024-11-20 19:00:23.984347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.327 [2024-11-20 19:00:23.984354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.327 [2024-11-20 19:00:23.984360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.327 [2024-11-20 19:00:23.984365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98632 len:8 PRP1 0x0 PRP2 0x0 00:23:08.327 [2024-11-20 19:00:23.984372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.327 [2024-11-20 19:00:23.984379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.327 [2024-11-20 19:00:23.984383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.327 [2024-11-20 19:00:23.984389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98640 len:8 PRP1 0x0 PRP2 0x0 00:23:08.327 [2024-11-20 19:00:23.984395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.327 [2024-11-20 19:00:23.984401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.327 [2024-11-20 19:00:23.984406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.327 [2024-11-20 19:00:23.984412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98648 len:8 PRP1 0x0 PRP2 0x0 00:23:08.327 [2024-11-20 19:00:23.984418] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.327 [2024-11-20 19:00:23.984425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.327 [2024-11-20 19:00:23.984430] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.327 [2024-11-20 19:00:23.984435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98656 len:8 PRP1 0x0 PRP2 0x0 00:23:08.327 [2024-11-20 19:00:23.984441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.327 [2024-11-20 19:00:23.984448] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.327 [2024-11-20 19:00:23.984453] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.327 [2024-11-20 19:00:23.984459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98664 len:8 PRP1 0x0 PRP2 0x0 00:23:08.327 [2024-11-20 19:00:23.984465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.327 [2024-11-20 19:00:23.984472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.327 [2024-11-20 19:00:23.984477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.327 [2024-11-20 19:00:23.984483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98672 len:8 PRP1 0x0 PRP2 0x0 00:23:08.327 [2024-11-20 19:00:23.984489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.327 [2024-11-20 19:00:23.984496] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.327 
[2024-11-20 19:00:23.995053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.328 [2024-11-20 19:00:23.995061] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.328 [2024-11-20 19:00:23.995068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.328 [2024-11-20 19:00:23.995075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98848 len:8 PRP1 0x0 PRP2 0x0 00:23:08.328 [2024-11-20 19:00:23.995084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.328 [2024-11-20 19:00:23.995093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.328 [2024-11-20 19:00:23.995101] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.328 [2024-11-20 19:00:23.995109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98856 len:8 PRP1 0x0 PRP2 0x0 00:23:08.328 [2024-11-20 19:00:23.995118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.328 [2024-11-20 19:00:23.995166] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:08.328 [2024-11-20 19:00:23.995193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.328 [2024-11-20 19:00:23.995212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.328 [2024-11-20 19:00:23.995223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:08.328 [2024-11-20 19:00:23.995232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.328 [2024-11-20 19:00:23.995242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.328 [2024-11-20 19:00:23.995253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.328 [2024-11-20 19:00:23.995262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.328 [2024-11-20 19:00:23.995271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.328 [2024-11-20 19:00:23.995280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:08.328 [2024-11-20 19:00:23.995317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d5340 (9): Bad file descriptor 00:23:08.328 [2024-11-20 19:00:23.999061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:08.328 [2024-11-20 19:00:24.021288] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:23:08.328 11165.90 IOPS, 43.62 MiB/s [2024-11-20T18:00:30.653Z] 11183.09 IOPS, 43.68 MiB/s [2024-11-20T18:00:30.653Z] 11196.17 IOPS, 43.74 MiB/s [2024-11-20T18:00:30.653Z] 11198.69 IOPS, 43.74 MiB/s [2024-11-20T18:00:30.653Z] 11215.79 IOPS, 43.81 MiB/s [2024-11-20T18:00:30.653Z] 11232.60 IOPS, 43.88 MiB/s 00:23:08.328 Latency(us) 00:23:08.328 [2024-11-20T18:00:30.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.328 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:08.328 Verification LBA range: start 0x0 length 0x4000 00:23:08.328 NVMe0n1 : 15.01 11228.05 43.86 634.53 0.00 10768.00 405.70 22219.82 00:23:08.328 [2024-11-20T18:00:30.653Z] =================================================================================================================== 00:23:08.328 [2024-11-20T18:00:30.653Z] Total : 11228.05 43.86 634.53 0.00 10768.00 405.70 22219.82 00:23:08.328 Received shutdown signal, test time was about 15.000000 seconds 00:23:08.328 00:23:08.329 Latency(us) 00:23:08.329 [2024-11-20T18:00:30.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.329 [2024-11-20T18:00:30.654Z] =================================================================================================================== 00:23:08.329 [2024-11-20T18:00:30.654Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:08.329 19:00:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:08.329 19:00:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:23:08.329 19:00:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:08.329 19:00:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3743463 00:23:08.329 19:00:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:08.329 19:00:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3743463 /var/tmp/bdevperf.sock 00:23:08.329 19:00:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3743463 ']' 00:23:08.329 19:00:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:08.329 19:00:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:08.329 19:00:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:08.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:08.329 19:00:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:08.329 19:00:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:08.329 19:00:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:08.329 19:00:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:08.329 19:00:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:08.329 [2024-11-20 19:00:30.561497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:08.329 19:00:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:08.588 [2024-11-20 19:00:30.753985] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:08.588 
19:00:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:08.847 NVMe0n1 00:23:08.847 19:00:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:09.105 00:23:09.105 19:00:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:09.364 00:23:09.364 19:00:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:09.364 19:00:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:09.623 19:00:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:09.881 19:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:13.169 19:00:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:13.169 19:00:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:13.169 19:00:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:13.169 19:00:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3744176 00:23:13.169 19:00:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3744176 00:23:14.107 { 00:23:14.107 "results": [ 00:23:14.107 { 00:23:14.107 "job": "NVMe0n1", 00:23:14.107 "core_mask": "0x1", 00:23:14.107 "workload": "verify", 00:23:14.107 "status": "finished", 00:23:14.107 "verify_range": { 00:23:14.107 "start": 0, 00:23:14.107 "length": 16384 00:23:14.107 }, 00:23:14.107 "queue_depth": 128, 00:23:14.107 "io_size": 4096, 00:23:14.107 "runtime": 1.003519, 00:23:14.107 "iops": 11345.076675180042, 00:23:14.107 "mibps": 44.31670576242204, 00:23:14.107 "io_failed": 0, 00:23:14.107 "io_timeout": 0, 00:23:14.107 "avg_latency_us": 11242.257458560764, 00:23:14.107 "min_latency_us": 975.2380952380952, 00:23:14.107 "max_latency_us": 8987.794285714286 00:23:14.107 } 00:23:14.107 ], 00:23:14.107 "core_count": 1 00:23:14.107 } 00:23:14.107 19:00:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:14.107 [2024-11-20 19:00:30.170831] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:23:14.107 [2024-11-20 19:00:30.170888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3743463 ] 00:23:14.107 [2024-11-20 19:00:30.247809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.107 [2024-11-20 19:00:30.285342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.107 [2024-11-20 19:00:32.050110] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:14.107 [2024-11-20 19:00:32.050154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.107 [2024-11-20 19:00:32.050165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.107 [2024-11-20 19:00:32.050174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.107 [2024-11-20 19:00:32.050180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.107 [2024-11-20 19:00:32.050188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.107 [2024-11-20 19:00:32.050195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.107 [2024-11-20 19:00:32.050208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.107 [2024-11-20 19:00:32.050215] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.107 [2024-11-20 19:00:32.050222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:23:14.107 [2024-11-20 19:00:32.050246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:23:14.107 [2024-11-20 19:00:32.050259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1241340 (9): Bad file descriptor 00:23:14.107 [2024-11-20 19:00:32.060785] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:23:14.107 Running I/O for 1 seconds... 00:23:14.107 11257.00 IOPS, 43.97 MiB/s 00:23:14.107 Latency(us) 00:23:14.107 [2024-11-20T18:00:36.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.107 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:14.107 Verification LBA range: start 0x0 length 0x4000 00:23:14.107 NVMe0n1 : 1.00 11345.08 44.32 0.00 0.00 11242.26 975.24 8987.79 00:23:14.107 [2024-11-20T18:00:36.432Z] =================================================================================================================== 00:23:14.107 [2024-11-20T18:00:36.432Z] Total : 11345.08 44.32 0.00 0.00 11242.26 975.24 8987.79 00:23:14.107 19:00:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:14.107 19:00:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:14.365 19:00:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:14.623 19:00:36 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:14.623 19:00:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:14.882 19:00:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:15.142 19:00:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:18.432 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:18.432 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:18.432 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3743463 00:23:18.432 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3743463 ']' 00:23:18.432 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3743463 00:23:18.432 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:18.432 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:18.432 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3743463 00:23:18.432 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:18.432 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:18.432 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3743463' 00:23:18.432 killing 
process with pid 3743463 00:23:18.432 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3743463 00:23:18.432 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3743463 00:23:18.432 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:18.432 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:18.692 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:18.692 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:18.692 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:18.692 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:18.692 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:23:18.692 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:18.692 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:23:18.692 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:18.692 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:18.692 rmmod nvme_tcp 00:23:18.692 rmmod nvme_fabrics 00:23:18.692 rmmod nvme_keyring 00:23:18.692 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:18.692 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:23:18.692 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:23:18.692 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3740384 ']' 00:23:18.692 19:00:40 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3740384 00:23:18.692 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3740384 ']' 00:23:18.692 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3740384 00:23:18.692 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:18.692 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:18.692 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3740384 00:23:18.692 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:18.692 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:18.692 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3740384' 00:23:18.692 killing process with pid 3740384 00:23:18.692 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3740384 00:23:18.692 19:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3740384 00:23:18.952 19:00:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:18.952 19:00:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:18.952 19:00:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:18.952 19:00:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:23:18.952 19:00:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:23:18.952 19:00:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:18.952 19:00:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:23:18.952 19:00:41 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:18.952 19:00:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:18.952 19:00:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.952 19:00:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:18.952 19:00:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.488 19:00:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:21.488 00:23:21.488 real 0m37.888s 00:23:21.488 user 1m59.791s 00:23:21.488 sys 0m8.024s 00:23:21.488 19:00:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:21.488 19:00:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:21.488 ************************************ 00:23:21.488 END TEST nvmf_failover 00:23:21.488 ************************************ 00:23:21.488 19:00:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:21.488 19:00:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:21.488 19:00:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:21.488 19:00:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.488 ************************************ 00:23:21.488 START TEST nvmf_host_discovery 00:23:21.488 ************************************ 00:23:21.488 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:21.488 * Looking for test storage... 
00:23:21.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:21.488 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:21.488 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:23:21.488 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:21.488 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:21.488 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:21.488 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:21.488 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:21.488 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:21.488 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:21.488 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:21.488 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:21.488 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:21.488 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:21.488 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:21.488 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:21.488 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:21.488 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:23:21.488 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:21.488 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:21.488 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:21.488 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:23:21.488 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:21.488 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:23:21.488 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:21.488 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:21.488 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:23:21.488 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:21.488 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:21.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.489 --rc genhtml_branch_coverage=1 00:23:21.489 --rc genhtml_function_coverage=1 00:23:21.489 --rc 
genhtml_legend=1 00:23:21.489 --rc geninfo_all_blocks=1 00:23:21.489 --rc geninfo_unexecuted_blocks=1 00:23:21.489 00:23:21.489 ' 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:21.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.489 --rc genhtml_branch_coverage=1 00:23:21.489 --rc genhtml_function_coverage=1 00:23:21.489 --rc genhtml_legend=1 00:23:21.489 --rc geninfo_all_blocks=1 00:23:21.489 --rc geninfo_unexecuted_blocks=1 00:23:21.489 00:23:21.489 ' 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:21.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.489 --rc genhtml_branch_coverage=1 00:23:21.489 --rc genhtml_function_coverage=1 00:23:21.489 --rc genhtml_legend=1 00:23:21.489 --rc geninfo_all_blocks=1 00:23:21.489 --rc geninfo_unexecuted_blocks=1 00:23:21.489 00:23:21.489 ' 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:21.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.489 --rc genhtml_branch_coverage=1 00:23:21.489 --rc genhtml_function_coverage=1 00:23:21.489 --rc genhtml_legend=1 00:23:21.489 --rc geninfo_all_blocks=1 00:23:21.489 --rc geninfo_unexecuted_blocks=1 00:23:21.489 00:23:21.489 ' 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:21.489 19:00:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:21.489 19:00:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:21.489 19:00:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:21.489 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:23:21.489 19:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:23:28.060 
19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:28.060 19:00:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:28.060 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:28.060 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:28.060 Found net devices under 0000:86:00.0: cvl_0_0 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.060 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:28.061 Found net devices under 0000:86:00.1: cvl_0_1 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:28.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:28.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.478 ms 00:23:28.061 00:23:28.061 --- 10.0.0.2 ping statistics --- 00:23:28.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.061 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:28.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:28.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:23:28.061 00:23:28.061 --- 10.0.0.1 ping statistics --- 00:23:28.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.061 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:28.061 
19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3748616 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3748616 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3748616 ']' 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.061 [2024-11-20 19:00:49.499825] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:23:28.061 [2024-11-20 19:00:49.499877] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.061 [2024-11-20 19:00:49.580669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.061 [2024-11-20 19:00:49.621561] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.061 [2024-11-20 19:00:49.621600] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.061 [2024-11-20 19:00:49.621607] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.061 [2024-11-20 19:00:49.621613] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.061 [2024-11-20 19:00:49.621618] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:28.061 [2024-11-20 19:00:49.622174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:28.061 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:28.062 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:28.062 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:28.062 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.062 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.062 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:28.062 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.062 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.062 [2024-11-20 19:00:49.754089] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.062 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.062 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:28.062 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.062 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.062 [2024-11-20 19:00:49.766303] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:28.062 19:00:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.062 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:28.062 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.062 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.062 null0 00:23:28.062 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.062 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:28.062 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.062 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.062 null1 00:23:28.062 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.062 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:28.062 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.062 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.062 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.062 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3748750 00:23:28.062 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:28.062 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3748750 /tmp/host.sock 00:23:28.062 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 3748750 ']' 00:23:28.062 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:28.062 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:28.062 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:28.062 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:28.062 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:28.062 19:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.062 [2024-11-20 19:00:49.847368] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:23:28.062 [2024-11-20 19:00:49.847411] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3748750 ] 00:23:28.062 [2024-11-20 19:00:49.922992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.062 [2024-11-20 19:00:49.964708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:28.062 
19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:28.062 19:00:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:28.062 19:00:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:28.062 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:28.063 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.063 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:28.063 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.063 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:28.063 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.063 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:28.063 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:28.063 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:28.063 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.063 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.063 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:28.063 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.063 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:28.063 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.063 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.321 [2024-11-20 19:00:50.391860] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
-- # sort 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:23:28.321 19:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:28.887 [2024-11-20 19:00:51.130369] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:28.887 [2024-11-20 19:00:51.130389] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:28.887 [2024-11-20 19:00:51.130400] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:29.145 [2024-11-20 19:00:51.216661] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:29.145 [2024-11-20 19:00:51.431872] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:23:29.145 [2024-11-20 19:00:51.432647] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x2381df0:1 started. 00:23:29.145 [2024-11-20 19:00:51.434047] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:29.145 [2024-11-20 19:00:51.434063] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:29.403 [2024-11-20 19:00:51.480877] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2381df0 was disconnected and freed. delete nvme_qpair. 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:29.403 19:00:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:29.403 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.404 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.404 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:29.662 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.662 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:29.662 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:29.662 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:29.662 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:29.662 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:29.662 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.662 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.662 [2024-11-20 19:00:51.774179] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2350620:1 started. 
00:23:29.662 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.662 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:29.663 [2024-11-20 19:00:51.780432] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2350620 was disconnected and freed. delete nvme_qpair. 
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 
'expected_count))' 00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.663 [2024-11-20 19:00:51.875999] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:29.663 [2024-11-20 19:00:51.876559] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:29.663 [2024-11-20 19:00:51.876578] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs [2024-11-20 19:00:51.963832] bdev_nvme.c:7403:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:29.663 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:29.922 19:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:29.922 19:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:23:29.922 19:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:23:29.922 [2024-11-20 19:00:52.226131] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421
00:23:29.922 [2024-11-20 19:00:52.226164] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:23:29.922 [2024-11-20 19:00:52.226172] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:23:29.922 [2024-11-20 19:00:52.226177] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x [2024-11-20 19:00:53.119598] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer [2024-11-20 19:00:53.119619] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:30.857 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' [2024-11-20 19:00:53.126146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 [2024-11-20 19:00:53.126164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-11-20 19:00:53.126173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:30.858 [2024-11-20 19:00:53.126180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.858 [2024-11-20 19:00:53.126188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:30.858 [2024-11-20 19:00:53.126195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.858 [2024-11-20 19:00:53.126205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:30.858 [2024-11-20 19:00:53.126212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.858 [2024-11-20 19:00:53.126218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2352390 is same with the state(6) to be set
00:23:30.858 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:23:30.858 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:30.858 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:30.858 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:30.858 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:30.858 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:30.858 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs [2024-11-20 19:00:53.136158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2352390 (9): Bad file descriptor
00:23:30.858 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] [2024-11-20 19:00:53.146192] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. [2024-11-20 19:00:53.146206] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. [2024-11-20 19:00:53.146214] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. [2024-11-20 19:00:53.146219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller [2024-11-20 19:00:53.146235] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:23:30.858 [2024-11-20 19:00:53.146483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.858 [2024-11-20 19:00:53.146499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2352390 with addr=10.0.0.2, port=4420
00:23:30.858 [2024-11-20 19:00:53.146507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2352390 is same with the state(6) to be set
00:23:30.858 [2024-11-20 19:00:53.146519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2352390 (9): Bad file descriptor
00:23:30.858 [2024-11-20 19:00:53.146529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:23:30.858 [2024-11-20 19:00:53.146536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:23:30.858 [2024-11-20 19:00:53.146543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:23:30.858 [2024-11-20 19:00:53.146549] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:23:30.858 [2024-11-20 19:00:53.146554] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:23:30.858 [2024-11-20 19:00:53.146558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:23:30.858 [2024-11-20 19:00:53.156266] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:23:30.858 [2024-11-20 19:00:53.156277] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:23:30.858 [2024-11-20 19:00:53.156281] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:23:30.858 [2024-11-20 19:00:53.156285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:23:30.858 [2024-11-20 19:00:53.156300] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:23:30.858 [2024-11-20 19:00:53.156533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.858 [2024-11-20 19:00:53.156547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2352390 with addr=10.0.0.2, port=4420
00:23:30.858 [2024-11-20 19:00:53.156555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2352390 is same with the state(6) to be set
00:23:30.858 [2024-11-20 19:00:53.156567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2352390 (9): Bad file descriptor
00:23:30.858 [2024-11-20 19:00:53.156577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:23:30.858 [2024-11-20 19:00:53.156583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:23:30.858 [2024-11-20 19:00:53.156590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:23:30.858 [2024-11-20 19:00:53.156596] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:23:30.858 [2024-11-20 19:00:53.156600] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:23:30.858 [2024-11-20 19:00:53.156604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:23:30.858 [2024-11-20 19:00:53.166332] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:23:30.858 [2024-11-20 19:00:53.166347] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:23:30.858 [2024-11-20 19:00:53.166351] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:23:30.858 [2024-11-20 19:00:53.166355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:23:30.858 [2024-11-20 19:00:53.166370] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:23:30.858 [2024-11-20 19:00:53.166535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.858 [2024-11-20 19:00:53.166548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2352390 with addr=10.0.0.2, port=4420
00:23:30.858 [2024-11-20 19:00:53.166556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2352390 is same with the state(6) to be set
00:23:30.858 [2024-11-20 19:00:53.166566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2352390 (9): Bad file descriptor
00:23:30.858 [2024-11-20 19:00:53.166577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:23:30.858 [2024-11-20 19:00:53.166584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:23:30.858 [2024-11-20 19:00:53.166591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:23:30.858 [2024-11-20 19:00:53.166597] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:23:30.858 [2024-11-20 19:00:53.166601] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:23:30.858 [2024-11-20 19:00:53.166605] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:23:30.858 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:30.858 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:30.858 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:23:30.858 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:23:30.858 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:30.858 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:30.858 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' [2024-11-20 19:00:53.176401] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. [2024-11-20 19:00:53.176413] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. [2024-11-20 19:00:53.176418] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:23:30.858 [2024-11-20 19:00:53.176421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:23:30.859 [2024-11-20 19:00:53.176435] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:23:30.859 [2024-11-20 19:00:53.176616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.859 [2024-11-20 19:00:53.176630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2352390 with addr=10.0.0.2, port=4420
00:23:30.859 [2024-11-20 19:00:53.176637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2352390 is same with the state(6) to be set
00:23:30.859 [2024-11-20 19:00:53.176648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2352390 (9): Bad file descriptor
00:23:30.859 [2024-11-20 19:00:53.176658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:23:30.859 [2024-11-20 19:00:53.176667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:23:30.859 [2024-11-20 19:00:53.176674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:23:30.859 [2024-11-20 19:00:53.176679] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:23:30.859 [2024-11-20 19:00:53.176683] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:23:30.859 [2024-11-20 19:00:53.176687] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:23:30.859 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:23:30.859 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:30.859 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:30.859 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:30.859 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:30.859 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:30.859 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:31.118 [2024-11-20 19:00:53.186467] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:23:31.118 [2024-11-20 19:00:53.186480] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:23:31.118 [2024-11-20 19:00:53.186484] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:23:31.119 [2024-11-20 19:00:53.186488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:23:31.119 [2024-11-20 19:00:53.186503] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:23:31.119 [2024-11-20 19:00:53.186664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:31.119 [2024-11-20 19:00:53.186678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2352390 with addr=10.0.0.2, port=4420
00:23:31.119 [2024-11-20 19:00:53.186685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2352390 is same with the state(6) to be set
00:23:31.119 [2024-11-20 19:00:53.186696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2352390 (9): Bad file descriptor
00:23:31.119 [2024-11-20 19:00:53.186706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:23:31.119 [2024-11-20 19:00:53.186713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:23:31.119 [2024-11-20 19:00:53.186720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:23:31.119 [2024-11-20 19:00:53.186726] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:23:31.119 [2024-11-20 19:00:53.186730] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:23:31.119 [2024-11-20 19:00:53.186734] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:23:31.119 [2024-11-20 19:00:53.196535] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:23:31.119 [2024-11-20 19:00:53.196545] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:23:31.119 [2024-11-20 19:00:53.196549] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:23:31.119 [2024-11-20 19:00:53.196553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:23:31.119 [2024-11-20 19:00:53.196570] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:23:31.119 [2024-11-20 19:00:53.196796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:31.119 [2024-11-20 19:00:53.196809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2352390 with addr=10.0.0.2, port=4420
00:23:31.119 [2024-11-20 19:00:53.196817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2352390 is same with the state(6) to be set
00:23:31.119 [2024-11-20 19:00:53.196827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2352390 (9): Bad file descriptor
00:23:31.119 [2024-11-20 19:00:53.196837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:23:31.119 [2024-11-20 19:00:53.196843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:23:31.119 [2024-11-20 19:00:53.196850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:23:31.119 [2024-11-20 19:00:53.196856] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:23:31.119 [2024-11-20 19:00:53.196861] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:23:31.119 [2024-11-20 19:00:53.196864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:23:31.119 [2024-11-20 19:00:53.206601] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:23:31.119 [2024-11-20 19:00:53.206614] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:23:31.119 [2024-11-20 19:00:53.206618] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:23:31.119 [2024-11-20 19:00:53.206622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:23:31.119 [2024-11-20 19:00:53.206636] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:23:31.119 [2024-11-20 19:00:53.206838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:31.119 [2024-11-20 19:00:53.206853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2352390 with addr=10.0.0.2, port=4420
00:23:31.119 [2024-11-20 19:00:53.206860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2352390 is same with the state(6) to be set
00:23:31.119 [2024-11-20 19:00:53.206871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2352390 (9): Bad file descriptor
00:23:31.119 [2024-11-20 19:00:53.206881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:23:31.119 [2024-11-20 19:00:53.206888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:23:31.119 [2024-11-20 19:00:53.206895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:23:31.119 [2024-11-20 19:00:53.206901] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:23:31.119 [2024-11-20 19:00:53.206905] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:23:31.119 [2024-11-20 19:00:53.206909] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:23:31.119 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] [2024-11-20 19:00:53.216666] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. [2024-11-20 19:00:53.216677] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. [2024-11-20 19:00:53.216687] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. [2024-11-20 19:00:53.216691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller [2024-11-20 19:00:53.216704] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:23:31.119 [2024-11-20 19:00:53.216953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.119 [2024-11-20 19:00:53.216966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2352390 with addr=10.0.0.2, port=4420 00:23:31.119 [2024-11-20 19:00:53.216975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2352390 is same with the state(6) to be set 00:23:31.119 [2024-11-20 19:00:53.216984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2352390 (9): Bad file descriptor 00:23:31.119 [2024-11-20 19:00:53.216994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:31.119 [2024-11-20 19:00:53.217000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:31.119 [2024-11-20 19:00:53.217007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:31.119 [2024-11-20 19:00:53.217013] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:31.119 [2024-11-20 19:00:53.217017] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:31.119 [2024-11-20 19:00:53.217021] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:31.119 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:31.119 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:31.119 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:31.119 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:31.119 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:31.119 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:31.119 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:31.119 [2024-11-20 19:00:53.226736] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:31.119 [2024-11-20 19:00:53.226748] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:31.119 [2024-11-20 19:00:53.226752] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:31.120 [2024-11-20 19:00:53.226756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:31.120 [2024-11-20 19:00:53.226770] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:31.120 [2024-11-20 19:00:53.226920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.120 [2024-11-20 19:00:53.226932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2352390 with addr=10.0.0.2, port=4420 00:23:31.120 [2024-11-20 19:00:53.226940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2352390 is same with the state(6) to be set 00:23:31.120 [2024-11-20 19:00:53.226949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2352390 (9): Bad file descriptor 00:23:31.120 [2024-11-20 19:00:53.226959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:31.120 [2024-11-20 19:00:53.226969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:31.120 [2024-11-20 19:00:53.226975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:31.120 [2024-11-20 19:00:53.226981] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:31.120 [2024-11-20 19:00:53.226986] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:31.120 [2024-11-20 19:00:53.226990] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:31.120 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:31.120 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:31.120 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:31.120 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.120 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:31.120 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:31.120 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:31.120 [2024-11-20 19:00:53.236801] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:31.120 [2024-11-20 19:00:53.236816] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:31.120 [2024-11-20 19:00:53.236821] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:31.120 [2024-11-20 19:00:53.236825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:31.120 [2024-11-20 19:00:53.236840] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:31.120 [2024-11-20 19:00:53.237012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.120 [2024-11-20 19:00:53.237025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2352390 with addr=10.0.0.2, port=4420 00:23:31.120 [2024-11-20 19:00:53.237033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2352390 is same with the state(6) to be set 00:23:31.120 [2024-11-20 19:00:53.237044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2352390 (9): Bad file descriptor 00:23:31.120 [2024-11-20 19:00:53.237054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:31.120 [2024-11-20 19:00:53.237061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:31.120 [2024-11-20 19:00:53.237068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:31.120 [2024-11-20 19:00:53.237073] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:31.120 [2024-11-20 19:00:53.237077] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:31.120 [2024-11-20 19:00:53.237081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:31.120 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.120 [2024-11-20 19:00:53.246870] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:31.120 [2024-11-20 19:00:53.246881] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:23:31.120 [2024-11-20 19:00:53.246885] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:31.120 [2024-11-20 19:00:53.246892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:31.120 [2024-11-20 19:00:53.246906] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:31.120 [2024-11-20 19:00:53.247126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.120 [2024-11-20 19:00:53.247139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2352390 with addr=10.0.0.2, port=4420 00:23:31.120 [2024-11-20 19:00:53.247147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2352390 is same with the state(6) to be set 00:23:31.120 [2024-11-20 19:00:53.247158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2352390 (9): Bad file descriptor 00:23:31.120 [2024-11-20 19:00:53.247168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:31.120 [2024-11-20 19:00:53.247174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:31.120 [2024-11-20 19:00:53.247181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:31.120 [2024-11-20 19:00:53.247187] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:31.120 [2024-11-20 19:00:53.247191] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:31.120 [2024-11-20 19:00:53.247195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:31.120 [2024-11-20 19:00:53.247391] bdev_nvme.c:7266:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:31.120 [2024-11-20 19:00:53.247405] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:31.120 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:23:31.120 19:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:32.055 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:32.055 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:32.055 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:32.055 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:32.055 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:32.055 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.055 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:32.056 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.056 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:32.056 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.056 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:23:32.056 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@922 -- # return 0 00:23:32.056 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:32.056 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:32.056 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:32.056 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:32.056 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:32.056 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:32.056 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:32.056 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:32.056 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:32.056 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:32.056 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.056 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.056 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.056 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:32.056 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:32.056 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:32.056 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:32.056 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:32.056 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.056 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:32.314 19:00:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:32.314 19:00:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:32.314 19:00:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.314 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:32.315 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:32.315 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:32.315 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:32.315 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:32.315 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.315 19:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.248 [2024-11-20 19:00:55.562354] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:33.248 [2024-11-20 19:00:55.562370] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:33.248 [2024-11-20 19:00:55.562382] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:33.506 [2024-11-20 19:00:55.648650] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:33.764 [2024-11-20 19:00:55.908877] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:23:33.764 [2024-11-20 19:00:55.909456] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x24b9e10:1 started. 00:23:33.764 [2024-11-20 19:00:55.911015] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:33.764 [2024-11-20 19:00:55.911039] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:33.764 19:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.764 19:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:33.764 19:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:33.764 19:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:33.764 19:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:33.764 19:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:33.764 19:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:33.764 19:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:33.764 19:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:33.764 19:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.764 19:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.764 request: 00:23:33.764 { 00:23:33.764 "name": "nvme", 00:23:33.764 "trtype": "tcp", 00:23:33.764 "traddr": "10.0.0.2", 00:23:33.764 "adrfam": "ipv4", 00:23:33.764 "trsvcid": "8009", 00:23:33.764 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:33.764 "wait_for_attach": true, 00:23:33.764 "method": "bdev_nvme_start_discovery", 00:23:33.764 "req_id": 1 00:23:33.764 } 00:23:33.764 Got JSON-RPC error response 00:23:33.764 response: 00:23:33.764 { 00:23:33.764 "code": -17, 00:23:33.764 "message": "File exists" 00:23:33.764 } 00:23:33.764 19:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:33.764 19:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:33.764 19:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:33.764 19:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:33.764 19:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:33.764 19:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:33.764 19:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:33.764 19:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:33.764 19:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.764 19:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 
00:23:33.764 19:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.764 19:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:33.764 19:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.764 [2024-11-20 19:00:55.952772] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x24b9e10 was disconnected and freed. delete nvme_qpair. 00:23:33.764 19:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:33.764 19:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:33.764 19:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:33.765 19:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:33.765 19:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.765 19:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:33.765 19:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.765 19:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:33.765 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.765 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:33.765 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:33.765 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:33.765 19:00:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:33.765 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:33.765 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:33.765 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:33.765 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:33.765 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:33.765 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.765 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.765 request: 00:23:33.765 { 00:23:33.765 "name": "nvme_second", 00:23:33.765 "trtype": "tcp", 00:23:33.765 "traddr": "10.0.0.2", 00:23:33.765 "adrfam": "ipv4", 00:23:33.765 "trsvcid": "8009", 00:23:33.765 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:33.765 "wait_for_attach": true, 00:23:33.765 "method": "bdev_nvme_start_discovery", 00:23:33.765 "req_id": 1 00:23:33.765 } 00:23:33.765 Got JSON-RPC error response 00:23:33.765 response: 00:23:33.765 { 00:23:33.765 "code": -17, 00:23:33.765 "message": "File exists" 00:23:33.765 } 00:23:33.765 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:33.765 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:33.765 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:33.765 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:33.765 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:33.765 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:33.765 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:33.765 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:33.765 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.765 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:33.765 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.765 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:33.765 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.023 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:34.023 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:34.023 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:34.023 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:34.023 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.023 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:34.023 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.023 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # xargs 00:23:34.023 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.023 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:34.023 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:34.023 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:34.023 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:34.023 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:34.023 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:34.023 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:34.023 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:34.023 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:34.023 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.023 19:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.958 [2024-11-20 19:00:57.154471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.958 [2024-11-20 19:00:57.154497] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234edc0 with addr=10.0.0.2, port=8010 00:23:34.958 [2024-11-20 19:00:57.154510] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:34.958 [2024-11-20 19:00:57.154516] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:34.958 [2024-11-20 19:00:57.154522] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:35.891 [2024-11-20 19:00:58.156823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.891 [2024-11-20 19:00:58.156847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x236c210 with addr=10.0.0.2, port=8010 00:23:35.891 [2024-11-20 19:00:58.156858] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:35.891 [2024-11-20 19:00:58.156864] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:35.891 [2024-11-20 19:00:58.156870] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:37.267 [2024-11-20 19:00:59.159120] bdev_nvme.c:7522:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:37.267 request: 00:23:37.267 { 00:23:37.267 "name": "nvme_second", 00:23:37.267 "trtype": "tcp", 00:23:37.267 "traddr": "10.0.0.2", 00:23:37.267 "adrfam": "ipv4", 00:23:37.267 "trsvcid": "8010", 00:23:37.267 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:37.267 "wait_for_attach": false, 00:23:37.267 "attach_timeout_ms": 3000, 00:23:37.267 "method": "bdev_nvme_start_discovery", 00:23:37.267 "req_id": 1 00:23:37.267 } 00:23:37.267 Got JSON-RPC error response 00:23:37.267 response: 00:23:37.267 { 00:23:37.267 "code": -110, 00:23:37.267 "message": "Connection timed out" 00:23:37.267 } 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # 
[[ 1 == 0 ]] 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3748750 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:23:37.267 19:00:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:37.267 rmmod nvme_tcp 00:23:37.267 rmmod nvme_fabrics 00:23:37.267 rmmod nvme_keyring 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3748616 ']' 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3748616 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 3748616 ']' 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 3748616 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3748616 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3748616' 
00:23:37.267 killing process with pid 3748616 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 3748616 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 3748616 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.267 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:37.268 19:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:39.805 00:23:39.805 real 0m18.259s 00:23:39.805 user 0m22.549s 00:23:39.805 sys 0m5.910s 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:39.805 19:01:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.805 ************************************ 00:23:39.805 END TEST nvmf_host_discovery 00:23:39.805 ************************************ 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.805 ************************************ 00:23:39.805 START TEST nvmf_host_multipath_status 00:23:39.805 ************************************ 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:39.805 * Looking for test storage... 
00:23:39.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:23:39.805 19:01:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:39.805 19:01:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:39.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.805 --rc genhtml_branch_coverage=1 00:23:39.805 --rc genhtml_function_coverage=1 00:23:39.805 --rc genhtml_legend=1 00:23:39.805 --rc geninfo_all_blocks=1 00:23:39.805 --rc geninfo_unexecuted_blocks=1 00:23:39.805 00:23:39.805 ' 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:39.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.805 --rc genhtml_branch_coverage=1 00:23:39.805 --rc genhtml_function_coverage=1 00:23:39.805 --rc genhtml_legend=1 00:23:39.805 --rc geninfo_all_blocks=1 00:23:39.805 --rc geninfo_unexecuted_blocks=1 00:23:39.805 00:23:39.805 ' 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:39.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.805 --rc genhtml_branch_coverage=1 00:23:39.805 --rc genhtml_function_coverage=1 00:23:39.805 --rc genhtml_legend=1 00:23:39.805 --rc geninfo_all_blocks=1 00:23:39.805 --rc geninfo_unexecuted_blocks=1 00:23:39.805 00:23:39.805 ' 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:39.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.805 --rc genhtml_branch_coverage=1 00:23:39.805 --rc genhtml_function_coverage=1 00:23:39.805 --rc genhtml_legend=1 00:23:39.805 --rc geninfo_all_blocks=1 00:23:39.805 --rc geninfo_unexecuted_blocks=1 00:23:39.805 00:23:39.805 ' 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:39.805 
19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.805 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:39.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:39.806 19:01:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:23:39.806 19:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:46.416 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:46.416 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:46.416 Found net devices under 0000:86:00.0: cvl_0_0 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.416 19:01:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:46.416 Found net devices under 0000:86:00.1: cvl_0_1 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:46.416 19:01:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:46.416 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:46.417 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:46.417 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:46.417 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:46.417 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:46.417 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:46.417 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:46.417 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:46.417 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:46.417 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:46.417 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:46.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:46.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.521 ms 00:23:46.417 00:23:46.417 --- 10.0.0.2 ping statistics --- 00:23:46.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.417 rtt min/avg/max/mdev = 0.521/0.521/0.521/0.000 ms 00:23:46.417 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:46.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:46.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:23:46.417 00:23:46.417 --- 10.0.0.1 ping statistics --- 00:23:46.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.417 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:23:46.417 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:46.417 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:23:46.417 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:46.417 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:46.417 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:46.417 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:46.417 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:46.417 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:46.417 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:46.417 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:46.417 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:46.417 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:46.417 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:46.417 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3753946 00:23:46.417 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
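To make the network bring-up above easier to follow, here is a hedged sketch of the topology that nvmf/common.sh's nvmf_tcp_init builds: the target port cvl_0_0 is moved into namespace cvl_0_0_ns_spdk with 10.0.0.2, the initiator port cvl_0_1 keeps 10.0.0.1 on the host side, and TCP port 4420 is opened in iptables. The DRYRUN guard is our addition, since the real commands need root:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace topology from the log above.
# Interface names, addresses, and the port-4420 rule are taken from the log;
# DRYRUN=1 (the default here) prints each command instead of running it.
DRYRUN=${DRYRUN:-1}
run() { if [ "$DRYRUN" = 1 ]; then echo "$*"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"                                # target-side namespace
run ip link set cvl_0_0 netns "$NS"                   # move target port into it
run ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator IP (host side)
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                # host -> target check
```

With DRYRUN=0 and root privileges this reproduces the `ip`/`iptables`/`ping` calls logged between nvmf/common.sh@271 and @290.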
-- nvmf/common.sh@510 -- # waitforlisten 3753946 00:23:46.417 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:46.417 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3753946 ']' 00:23:46.417 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.417 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:46.417 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.417 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:46.417 19:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:46.417 [2024-11-20 19:01:07.844915] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:23:46.417 [2024-11-20 19:01:07.844963] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.417 [2024-11-20 19:01:07.923844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:46.417 [2024-11-20 19:01:07.964330] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.417 [2024-11-20 19:01:07.964367] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:46.417 [2024-11-20 19:01:07.964374] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.417 [2024-11-20 19:01:07.964381] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.417 [2024-11-20 19:01:07.964386] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:46.417 [2024-11-20 19:01:07.965543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:46.417 [2024-11-20 19:01:07.965546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.417 19:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:46.417 19:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:46.417 19:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:46.417 19:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:46.417 19:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:46.417 19:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:46.417 19:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3753946 00:23:46.417 19:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:46.417 [2024-11-20 19:01:08.273830] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:46.417 19:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:23:46.417 Malloc0 00:23:46.417 19:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:46.417 19:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:46.676 19:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:46.934 [2024-11-20 19:01:09.102949] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:46.934 19:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:47.192 [2024-11-20 19:01:09.295425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:47.192 19:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:47.192 19:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3754202 00:23:47.192 19:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:47.192 19:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3754202 /var/tmp/bdevperf.sock 00:23:47.192 19:01:09 
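Stripped of xtrace noise, the target-side RPC sequence above (TCP transport, Malloc0 bdev, subsystem cnode1, its namespace, and the two listeners that become the two paths) looks like this. `rpc` just echoes by default so the sketch runs without a live SPDK target; pointing RPC_BIN at scripts/rpc.py would issue the calls for real:

```shell
#!/usr/bin/env bash
# Sketch of the target-side configuration from the log. Subcommands and
# arguments are the ones logged; RPC_BIN=echo makes this standalone.
RPC_BIN=${RPC_BIN:-echo}
rpc() { $RPC_BIN "$@"; }

NQN=nqn.2016-06.io.spdk:cnode1
rpc nvmf_create_transport -t tcp -o -u 8192         # TCP transport
rpc bdev_malloc_create 64 512 -b Malloc0            # 64 MB bdev, 512 B blocks
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -r -m 2
rpc nvmf_subsystem_add_ns "$NQN" Malloc0            # expose Malloc0 as a ns
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421  # 2nd path
```

In the log these calls additionally run through `ip netns exec cvl_0_0_ns_spdk`, since the target process lives inside that namespace.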
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3754202 ']' 00:23:47.192 19:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:47.192 19:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:47.192 19:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:47.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:47.192 19:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:47.192 19:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:47.449 19:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:47.449 19:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:47.449 19:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:47.706 19:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:47.964 Nvme0n1 00:23:48.223 19:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:48.481 Nvme0n1 00:23:48.481 19:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:48.481 19:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:51.006 19:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:51.006 19:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:51.006 19:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:51.006 19:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:51.937 19:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:51.937 19:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:51.937 19:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.937 19:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:52.194 19:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
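On the host side, bdevperf is then pointed at the same subsystem through both listeners; because both bdev_nvme_attach_controller calls use `-b Nvme0` and `-x multipath`, the two connections collapse into a single Nvme0n1 bdev with two I/O paths. A sketch under the same echo-by-default convention (flags as logged):

```shell
#!/usr/bin/env bash
# Sketch of the host-side multipath attach from the log. `rpc` echoes by
# default; the real calls also pass `-s /var/tmp/bdevperf.sock` so they
# reach bdevperf's RPC socket rather than the target's.
RPC_BIN=${RPC_BIN:-echo}
rpc() { $RPC_BIN "$@"; }

NQN=nqn.2016-06.io.spdk:cnode1
rpc bdev_nvme_set_options -r -1                     # retry reconnects forever
rpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n "$NQN" -x multipath -l -1 -o 10        # first path
rpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
  -f ipv4 -n "$NQN" -x multipath -l -1 -o 10        # second path, same Nvme0n1
```

After the second attach the log prints `Nvme0n1` once more but no new bdev appears: the 4421 connection registers as an additional path on the existing controller.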
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.194 19:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:52.194 19:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.194 19:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:52.450 19:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:52.450 19:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:52.450 19:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.450 19:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:52.706 19:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.707 19:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:52.707 19:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.707 19:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:52.707 19:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.707 19:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:52.707 19:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.707 19:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:52.963 19:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.963 19:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:52.963 19:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.963 19:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:53.220 19:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.220 19:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:53.220 19:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:53.477 19:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
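Each check_status round above runs the same port_status probe: dump bdevperf's I/O paths and pull one field (`current`, `connected`, or `accessible`) for the path on a given trsvcid. A runnable stand-in, with python3 replacing the log's jq filter and a hypothetical JSON sample shaped like `bdev_nvme_get_io_paths` output:

```shell
#!/usr/bin/env bash
# Stand-in for the log's port_status helper. The JSON below is a minimal
# assumed sample of bdev_nvme_get_io_paths output (two paths, 4420 active);
# python3 does the select that the log performs with jq.
sample_json='{"poll_groups": [{"io_paths": [
  {"transport": {"trsvcid": "4420"}, "current": true,  "connected": true, "accessible": true},
  {"transport": {"trsvcid": "4421"}, "current": false, "connected": true, "accessible": true}]}]}'

port_status() {  # usage: port_status <trsvcid> <field>  -> "true"/"false"
  printf '%s' "$sample_json" | python3 -c '
import json, sys
port, field = sys.argv[1], sys.argv[2]
doc = json.load(sys.stdin)
for pg in doc["poll_groups"]:
    for path in pg["io_paths"]:
        if path["transport"]["trsvcid"] == port:
            print(str(path[field]).lower())
' "$1" "$2"
}

port_status 4420 current      # query the active path
port_status 4421 current      # query the standby path
```

The test then flips the target's ANA state with nvmf_subsystem_listener_set_ana_state (optimized, non_optimized, inaccessible) and re-runs these probes, which is exactly the true/false pattern visible in the remainder of the log.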
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:53.734 19:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:54.667 19:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:54.667 19:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:54.667 19:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.667 19:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:54.926 19:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:54.926 19:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:54.926 19:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.926 19:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:55.183 19:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.183 19:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:55.183 19:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.183 19:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:55.183 19:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.183 19:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:55.183 19:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:55.183 19:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.442 19:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.442 19:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:55.442 19:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.442 19:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:55.700 19:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.700 19:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:55.700 19:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.700 19:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:55.957 19:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.957 19:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:55.957 19:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:56.215 19:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:56.215 19:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:57.588 19:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:57.588 19:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:57.589 19:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.589 19:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:57.589 19:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.589 19:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:57.589 19:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.589 19:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:57.847 19:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:57.847 19:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:57.847 19:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.847 19:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:57.847 19:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.847 19:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:57.847 19:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.847 19:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:58.105 19:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.105 19:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:58.105 19:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.105 19:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:58.364 19:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.364 19:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:58.364 19:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:58.364 19:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.622 19:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.622 19:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:58.622 19:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:58.880 19:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:58.880 19:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:00.265 19:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:00.265 19:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:00.265 19:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.265 19:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:00.265 19:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.265 19:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:00.265 19:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.265 19:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:00.524 19:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:00.524 19:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:00.524 19:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.524 19:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:00.524 19:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.524 19:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:00.524 19:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.524 19:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:00.782 19:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.782 19:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:00.782 19:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.782 19:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:01.041 19:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.041 19:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:01.041 19:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.041 19:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:01.300 19:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:01.300 19:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:01.300 19:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:01.300 19:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:01.557 19:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:02.491 19:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:02.491 19:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:02.491 19:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.491 19:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:02.749 19:01:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:02.749 19:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:02.749 19:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.749 19:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:03.006 19:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:03.006 19:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:03.006 19:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.006 19:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:03.265 19:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.265 19:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:03.265 19:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.265 19:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:03.522 
19:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.522 19:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:03.522 19:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.522 19:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:03.522 19:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:03.522 19:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:03.522 19:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.523 19:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:03.780 19:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:03.780 19:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:03.780 19:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:04.039 19:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:04.297 19:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:05.231 19:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:05.231 19:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:05.231 19:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.231 19:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:05.489 19:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:05.489 19:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:05.489 19:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.489 19:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:05.748 19:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:05.748 19:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:05.748 19:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.748 19:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:05.748 19:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:05.748 19:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:05.748 19:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:05.748 19:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.006 19:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.006 19:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:06.007 19:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.007 19:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:06.265 19:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:06.265 19:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:06.265 19:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.265 19:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:06.523 19:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.523 19:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:06.781 19:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:06.781 19:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:06.781 19:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:07.052 19:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:07.989 19:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:07.989 19:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:08.248 19:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:08.248 19:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:08.248 19:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.248 19:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:08.248 19:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.248 19:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:08.507 19:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.507 19:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:08.507 19:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.507 19:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:08.765 19:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.765 19:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:08.765 19:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:08.765 19:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:09.024 19:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.024 19:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:09.024 19:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.024 19:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:09.282 19:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.282 19:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:09.282 19:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.282 19:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:09.282 19:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.282 19:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:09.282 19:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:09.540 19:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:09.799 19:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:10.734 19:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:10.734 19:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:10.734 19:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.734 19:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:10.992 19:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:10.992 19:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:10.992 19:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.992 19:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:11.250 19:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.250 19:01:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:11.250 19:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.250 19:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:11.508 19:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.508 19:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:11.508 19:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.508 19:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:11.508 19:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.508 19:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:11.508 19:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.508 19:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:11.765 19:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.765 
19:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:11.765 19:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.765 19:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:12.023 19:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.023 19:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:12.023 19:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:12.281 19:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:12.538 19:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:13.472 19:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:13.472 19:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:13.472 19:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.472 19:01:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:13.730 19:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.730 19:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:13.730 19:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.730 19:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:13.988 19:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.988 19:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:13.988 19:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.988 19:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:14.246 19:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.246 19:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:14.246 19:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.246 19:01:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:14.246 19:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.246 19:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:14.246 19:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.246 19:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:14.504 19:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.504 19:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:14.504 19:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.504 19:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:14.761 19:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.761 19:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:14.761 19:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:15.018 19:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:15.276 19:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:16.220 19:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:16.220 19:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:16.220 19:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.220 19:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:16.479 19:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.479 19:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:16.479 19:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.479 19:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:16.479 19:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:16.479 19:01:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:16.479 19:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:16.479 19:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.737 19:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.737 19:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:16.737 19:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.737 19:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:16.995 19:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.995 19:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:16.995 19:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.995 19:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:17.252 19:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.252 
19:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:17.252 19:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.252 19:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:17.510 19:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:17.510 19:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3754202 00:24:17.510 19:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3754202 ']' 00:24:17.510 19:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3754202 00:24:17.511 19:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:17.511 19:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:17.511 19:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3754202 00:24:17.511 19:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:17.511 19:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:17.511 19:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3754202' 00:24:17.511 killing process with pid 3754202 00:24:17.511 19:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3754202 00:24:17.511 
19:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3754202 00:24:17.511 { 00:24:17.511 "results": [ 00:24:17.511 { 00:24:17.511 "job": "Nvme0n1", 00:24:17.511 "core_mask": "0x4", 00:24:17.511 "workload": "verify", 00:24:17.511 "status": "terminated", 00:24:17.511 "verify_range": { 00:24:17.511 "start": 0, 00:24:17.511 "length": 16384 00:24:17.511 }, 00:24:17.511 "queue_depth": 128, 00:24:17.511 "io_size": 4096, 00:24:17.511 "runtime": 28.785885, 00:24:17.511 "iops": 10634.378619938208, 00:24:17.511 "mibps": 41.54054148413363, 00:24:17.511 "io_failed": 0, 00:24:17.511 "io_timeout": 0, 00:24:17.511 "avg_latency_us": 12016.149825938162, 00:24:17.511 "min_latency_us": 292.57142857142856, 00:24:17.511 "max_latency_us": 3019898.88 00:24:17.511 } 00:24:17.511 ], 00:24:17.511 "core_count": 1 00:24:17.511 } 00:24:17.783 19:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3754202 00:24:17.783 19:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:17.783 [2024-11-20 19:01:09.368941] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:24:17.783 [2024-11-20 19:01:09.368994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3754202 ] 00:24:17.783 [2024-11-20 19:01:09.444536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.783 [2024-11-20 19:01:09.485215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:17.783 Running I/O for 90 seconds... 
00:24:17.783 11341.00 IOPS, 44.30 MiB/s [2024-11-20T18:01:40.108Z] 11451.50 IOPS, 44.73 MiB/s [2024-11-20T18:01:40.108Z] 11469.33 IOPS, 44.80 MiB/s [2024-11-20T18:01:40.108Z] 11499.00 IOPS, 44.92 MiB/s [2024-11-20T18:01:40.108Z] 11522.80 IOPS, 45.01 MiB/s [2024-11-20T18:01:40.108Z] 11516.50 IOPS, 44.99 MiB/s [2024-11-20T18:01:40.108Z] 11523.57 IOPS, 45.01 MiB/s [2024-11-20T18:01:40.108Z] 11515.12 IOPS, 44.98 MiB/s [2024-11-20T18:01:40.108Z] 11499.00 IOPS, 44.92 MiB/s [2024-11-20T18:01:40.108Z] 11484.80 IOPS, 44.86 MiB/s [2024-11-20T18:01:40.108Z] 11486.36 IOPS, 44.87 MiB/s [2024-11-20T18:01:40.108Z] 11478.42 IOPS, 44.84 MiB/s [2024-11-20T18:01:40.108Z] [2024-11-20 19:01:23.591619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:120360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.783 [2024-11-20 19:01:23.591659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:17.783 [2024-11-20 19:01:23.591697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:120376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.783 [2024-11-20 19:01:23.591707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:17.783 [2024-11-20 19:01:23.591720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:120384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.783 [2024-11-20 19:01:23.591727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:17.783 [2024-11-20 19:01:23.591740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.783 [2024-11-20 19:01:23.591748] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:17.783 [2024-11-20 19:01:23.591760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.591767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.591779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.591788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.591801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:120416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.591807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.591820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.591827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.591839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:120432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.591848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.591860] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:120440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.591875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.591888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:120448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.591895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.591907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:120456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.591916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.591930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:120464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.591937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.591950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.591957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.591972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:120480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.591980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.591993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:120488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.592001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.592794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:120368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.784 [2024-11-20 19:01:23.592805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.592821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:120496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.592828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.592842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:120504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.592849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.592863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:120512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.592871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.592885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 
nsid:1 lba:120520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.592893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.592907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:120528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.592919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.592934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:120536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.592941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.592955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:120544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.592962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.593968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:120552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.593989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.594009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:120560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.594017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.594034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:120568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.594042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.594058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:120576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.594065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.594083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:120584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.594090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.594106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:120592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.594113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.594129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:120600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.594137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.594154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120608 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.594161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.594178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.594186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.594207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:120624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.594215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.594235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:120632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.594242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.594259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:120640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.594267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.594284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:120648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.594293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 
cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.594310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:120656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.594316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.594334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:120664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.594341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.594358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:120672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.594365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.594381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:120680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.594389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.594405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:120688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.594412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:17.784 [2024-11-20 19:01:23.594429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:120696 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:17.784 [2024-11-20 19:01:23.594437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:17.785 [2024-11-20 19:01:23.594453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:120704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.785 [2024-11-20 19:01:23.594460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:17.785 [2024-11-20 19:01:23.594477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:120712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.785 [2024-11-20 19:01:23.594484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:17.785 [2024-11-20 19:01:23.594502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:120720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.785 [2024-11-20 19:01:23.594509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:17.785 [2024-11-20 19:01:23.594528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:120728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.785 [2024-11-20 19:01:23.594535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:17.785 [2024-11-20 19:01:23.594552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:120736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.785 [2024-11-20 19:01:23.594561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0055 p:0 
m:0 dnr:0 00:24:17.785 [2024-11-20 19:01:23.594577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:120744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.785 [2024-11-20 19:01:23.594584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:17.785 [2024-11-20 19:01:23.594601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:120752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.785 [2024-11-20 19:01:23.594609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:17.785 [2024-11-20 19:01:23.594678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:120760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.785 [2024-11-20 19:01:23.594688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:17.785 [2024-11-20 19:01:23.594706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:120768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.785 [2024-11-20 19:01:23.594713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:17.785 [2024-11-20 19:01:23.594732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:120776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.785 [2024-11-20 19:01:23.594753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:17.785 [2024-11-20 19:01:23.594771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:120784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:17.785 [2024-11-20 19:01:23.594778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:17.785 [2024-11-20 19:01:23.594796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:120792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.785 [2024-11-20 19:01:23.594804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:17.785 [2024-11-20 19:01:23.594822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:120800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.785 [2024-11-20 19:01:23.594829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:17.785 [2024-11-20 19:01:23.594848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:120808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.785 [2024-11-20 19:01:23.594855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:17.785 [2024-11-20 19:01:23.594873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.785 [2024-11-20 19:01:23.594881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:17.785 [2024-11-20 19:01:23.594899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.785 [2024-11-20 19:01:23.594908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 
00:24:17.785 [2024-11-20 19:01:23.594926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.785 [2024-11-20 19:01:23.594934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.785 [2024-11-20 19:01:23.594952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.785 [2024-11-20 19:01:23.594959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:17.785 [2024-11-20 19:01:23.594977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.785 [2024-11-20 19:01:23.594985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:17.785 [2024-11-20 19:01:23.595003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.785 [2024-11-20 19:01:23.595010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:17.785 [2024-11-20 19:01:23.595028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.785 [2024-11-20 19:01:23.595036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:17.785 [2024-11-20 19:01:23.595054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.785 
[2024-11-20 19:01:23.595061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:17.785 [2024-11-20 19:01:23.595079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.785 [2024-11-20 19:01:23.595086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:17.785 11239.77 IOPS, 43.91 MiB/s [2024-11-20T18:01:40.110Z] 10436.93 IOPS, 40.77 MiB/s [2024-11-20T18:01:40.110Z] 9741.13 IOPS, 38.05 MiB/s [2024-11-20T18:01:40.110Z] 9319.12 IOPS, 36.40 MiB/s [2024-11-20T18:01:40.110Z] 9449.24 IOPS, 36.91 MiB/s [2024-11-20T18:01:40.110Z] 9567.00 IOPS, 37.37 MiB/s [2024-11-20T18:01:40.110Z] 9743.89 IOPS, 38.06 MiB/s [2024-11-20T18:01:40.110Z] 9937.40 IOPS, 38.82 MiB/s [2024-11-20T18:01:40.110Z] 10117.95 IOPS, 39.52 MiB/s [2024-11-20T18:01:40.110Z] 10170.82 IOPS, 39.73 MiB/s [2024-11-20T18:01:40.110Z] 10220.83 IOPS, 39.93 MiB/s [2024-11-20T18:01:40.110Z] 10278.50 IOPS, 40.15 MiB/s [2024-11-20T18:01:40.110Z] 10410.24 IOPS, 40.66 MiB/s [2024-11-20T18:01:40.110Z] 10521.35 IOPS, 41.10 MiB/s [2024-11-20T18:01:40.110Z] [2024-11-20 19:01:37.328112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.785 [2024-11-20 19:01:37.328151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:17.785 [2024-11-20 19:01:37.328171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.785 [2024-11-20 19:01:37.328180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 
00:24:17.785 [2024-11-20 19:01:37.328193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.785 [2024-11-20 19:01:37.328200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:17.785 [2024-11-20 19:01:37.328224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.785 [2024-11-20 19:01:37.328231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:17.785 [2024-11-20 19:01:37.328243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.785 [2024-11-20 19:01:37.328250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:17.785 [2024-11-20 19:01:37.328263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.785 [2024-11-20 19:01:37.328270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:17.785 [2024-11-20 19:01:37.328284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.785 [2024-11-20 19:01:37.328291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:17.785 [2024-11-20 19:01:37.328303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.785 [2024-11-20 
19:01:37.328310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:17.785 [2024-11-20 19:01:37.328323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.785 [2024-11-20 19:01:37.328331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:17.785 [2024-11-20 19:01:37.328343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.785 [2024-11-20 19:01:37.328350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:17.785 [2024-11-20 19:01:37.328362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.785 [2024-11-20 19:01:37.328370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:17.785 [2024-11-20 19:01:37.328383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.785 [2024-11-20 19:01:37.328389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:17.785 [2024-11-20 19:01:37.328402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.785 [2024-11-20 19:01:37.328409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.785 [2024-11-20 19:01:37.328422] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.786 [2024-11-20 19:01:37.328430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.328442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.786 [2024-11-20 19:01:37.328450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.328474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.786 [2024-11-20 19:01:37.328482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.328494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.786 [2024-11-20 19:01:37.328502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.328517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.786 [2024-11-20 19:01:37.328525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.328537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.786 [2024-11-20 19:01:37.328544] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.328556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.786 [2024-11-20 19:01:37.328563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.328576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:8032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.786 [2024-11-20 19:01:37.328584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.328598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.786 [2024-11-20 19:01:37.328605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.328618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.786 [2024-11-20 19:01:37.328625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.328637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.786 [2024-11-20 19:01:37.328643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.328656] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.786 [2024-11-20 19:01:37.328663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.328675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.786 [2024-11-20 19:01:37.328681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.328693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.786 [2024-11-20 19:01:37.328700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.328712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.786 [2024-11-20 19:01:37.328721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.328732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.786 [2024-11-20 19:01:37.328739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.328751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.786 [2024-11-20 19:01:37.328758] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.328770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.786 [2024-11-20 19:01:37.328777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.328789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.786 [2024-11-20 19:01:37.328795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.328807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.786 [2024-11-20 19:01:37.328813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.328826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.786 [2024-11-20 19:01:37.328832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.328845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.786 [2024-11-20 19:01:37.328851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.328880] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.786 [2024-11-20 19:01:37.328888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.328901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.786 [2024-11-20 19:01:37.328908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.328920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.786 [2024-11-20 19:01:37.328927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.328938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.786 [2024-11-20 19:01:37.328945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.328957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.786 [2024-11-20 19:01:37.328964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.328981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.786 [2024-11-20 19:01:37.328988] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.329001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.786 [2024-11-20 19:01:37.329007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.329021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.786 [2024-11-20 19:01:37.329028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.329040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.786 [2024-11-20 19:01:37.329047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.329059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:7424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.786 [2024-11-20 19:01:37.329066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.329079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.786 [2024-11-20 19:01:37.329086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.329098] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.786 [2024-11-20 19:01:37.329105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.329118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.786 [2024-11-20 19:01:37.329125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.329138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.786 [2024-11-20 19:01:37.329145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.329158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.786 [2024-11-20 19:01:37.329165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.329177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.786 [2024-11-20 19:01:37.329184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:17.786 [2024-11-20 19:01:37.329196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.787 [2024-11-20 19:01:37.329207] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.329222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.787 [2024-11-20 19:01:37.329229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.329666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:8048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.787 [2024-11-20 19:01:37.329679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.329696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.787 [2024-11-20 19:01:37.329703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.329716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.787 [2024-11-20 19:01:37.329723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.329735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.787 [2024-11-20 19:01:37.329743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.329755] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.787 [2024-11-20 19:01:37.329762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.329774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.787 [2024-11-20 19:01:37.329781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.331589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.787 [2024-11-20 19:01:37.331611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.331628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.787 [2024-11-20 19:01:37.331636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.331649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.787 [2024-11-20 19:01:37.331656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.331669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.787 [2024-11-20 19:01:37.331676] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.331688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.787 [2024-11-20 19:01:37.331695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.331707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.787 [2024-11-20 19:01:37.331718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.331732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.787 [2024-11-20 19:01:37.331739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.331752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.787 [2024-11-20 19:01:37.331759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.331771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.787 [2024-11-20 19:01:37.331779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.331792] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.787 [2024-11-20 19:01:37.331798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.331811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.787 [2024-11-20 19:01:37.331818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.331830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.787 [2024-11-20 19:01:37.331839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.331851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.787 [2024-11-20 19:01:37.331858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.331870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.787 [2024-11-20 19:01:37.331877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.331890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.787 [2024-11-20 19:01:37.331897] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.331910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.787 [2024-11-20 19:01:37.331917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.331929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.787 [2024-11-20 19:01:37.331937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.331949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.787 [2024-11-20 19:01:37.331956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.331970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.787 [2024-11-20 19:01:37.331977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.331990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.787 [2024-11-20 19:01:37.331997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.332009] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.787 [2024-11-20 19:01:37.332016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.332029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.787 [2024-11-20 19:01:37.332035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.332048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.787 [2024-11-20 19:01:37.332056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.332068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.787 [2024-11-20 19:01:37.332075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.332088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.787 [2024-11-20 19:01:37.332094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.332107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.787 [2024-11-20 19:01:37.332114] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.332127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.787 [2024-11-20 19:01:37.332134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.332146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.787 [2024-11-20 19:01:37.332153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.332165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.787 [2024-11-20 19:01:37.332172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.332185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.787 [2024-11-20 19:01:37.332191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.332211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.787 [2024-11-20 19:01:37.332218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:17.787 [2024-11-20 19:01:37.332231] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.788 [2024-11-20 19:01:37.332238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:17.788 [2024-11-20 19:01:37.332251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.788 [2024-11-20 19:01:37.332258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:17.788 [2024-11-20 19:01:37.332270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.788 [2024-11-20 19:01:37.332277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:17.788 [2024-11-20 19:01:37.332290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.788 [2024-11-20 19:01:37.332297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:17.788 [2024-11-20 19:01:37.332310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.788 [2024-11-20 19:01:37.332318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:17.788 [2024-11-20 19:01:37.332330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.788 [2024-11-20 19:01:37.332338] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:17.788 [2024-11-20 19:01:37.332351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.788 [2024-11-20 19:01:37.332358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:17.788 [2024-11-20 19:01:37.332371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.788 [2024-11-20 19:01:37.332378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:17.788 [2024-11-20 19:01:37.332390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.788 [2024-11-20 19:01:37.332397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:17.788 [2024-11-20 19:01:37.332409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.788 [2024-11-20 19:01:37.332417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:17.788 [2024-11-20 19:01:37.332429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.788 [2024-11-20 19:01:37.332436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:17.788 [2024-11-20 19:01:37.332448] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.788 [2024-11-20 19:01:37.332457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:17.788 [2024-11-20 19:01:37.332470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:8112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.788 [2024-11-20 19:01:37.332477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:17.788 [2024-11-20 19:01:37.332821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.788 [2024-11-20 19:01:37.332834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:17.788 [2024-11-20 19:01:37.332848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.788 [2024-11-20 19:01:37.332856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.788 [2024-11-20 19:01:37.332869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.788 [2024-11-20 19:01:37.332876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:17.788 [2024-11-20 19:01:37.332888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.788 [2024-11-20 19:01:37.332895] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:17.788 [2024-11-20 19:01:37.332907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.788 [2024-11-20 19:01:37.332915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.788 [2024-11-20 19:01:37.332927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.788 [2024-11-20 19:01:37.332934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.788 [2024-11-20 19:01:37.332946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.788 [2024-11-20 19:01:37.332953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:17.788 [2024-11-20 19:01:37.332965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.788 [2024-11-20 19:01:37.332973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:17.788 [2024-11-20 19:01:37.332985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.788 [2024-11-20 19:01:37.332992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:17.788 [2024-11-20 19:01:37.333004] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.788 [2024-11-20 19:01:37.333011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:17.788 [2024-11-20 19:01:37.333024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.788 [2024-11-20 19:01:37.333033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:17.788 [2024-11-20 19:01:37.333046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.788 [2024-11-20 19:01:37.333053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:17.788 [2024-11-20 19:01:37.333065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.788 [2024-11-20 19:01:37.333072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:17.788 [2024-11-20 19:01:37.333084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.788 [2024-11-20 19:01:37.333092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:17.788 [2024-11-20 19:01:37.333104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.788 [2024-11-20 19:01:37.333111] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:17.788 [2024-11-20 19:01:37.333122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.788 [2024-11-20 19:01:37.333130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:17.788 [2024-11-20 19:01:37.333142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.788 [2024-11-20 19:01:37.333149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:17.788 [2024-11-20 19:01:37.333162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.788 [2024-11-20 19:01:37.333169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:17.788 [2024-11-20 19:01:37.333181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.788 [2024-11-20 19:01:37.333188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:17.788 [2024-11-20 19:01:37.333774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:8160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.788 [2024-11-20 19:01:37.333791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:17.788 [2024-11-20 19:01:37.333806] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.789 [2024-11-20 19:01:37.333814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.333826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.789 [2024-11-20 19:01:37.333834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.333847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.789 [2024-11-20 19:01:37.333853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.333869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.789 [2024-11-20 19:01:37.333877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.333890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.789 [2024-11-20 19:01:37.333897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.333909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.789 [2024-11-20 19:01:37.333916] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.333934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.789 [2024-11-20 19:01:37.333941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.333954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.789 [2024-11-20 19:01:37.333960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.333973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.789 [2024-11-20 19:01:37.333981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.333994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.789 [2024-11-20 19:01:37.334001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.334014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.789 [2024-11-20 19:01:37.334020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.334032] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.789 [2024-11-20 19:01:37.334040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.334052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.789 [2024-11-20 19:01:37.334059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.334072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.789 [2024-11-20 19:01:37.334079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.334091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.789 [2024-11-20 19:01:37.334099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.334113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.789 [2024-11-20 19:01:37.334120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.334133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.789 [2024-11-20 19:01:37.334139] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.334152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.789 [2024-11-20 19:01:37.334160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.334173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.789 [2024-11-20 19:01:37.334179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.335720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.789 [2024-11-20 19:01:37.335739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.335754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.789 [2024-11-20 19:01:37.335761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.335774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.789 [2024-11-20 19:01:37.335781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.335796] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.789 [2024-11-20 19:01:37.335803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.335815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.789 [2024-11-20 19:01:37.335826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.335838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.789 [2024-11-20 19:01:37.335846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.335859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.789 [2024-11-20 19:01:37.335866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.335878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.789 [2024-11-20 19:01:37.335884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.335896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.789 [2024-11-20 19:01:37.335907] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.335919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.789 [2024-11-20 19:01:37.335926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.335938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.789 [2024-11-20 19:01:37.335945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.335957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.789 [2024-11-20 19:01:37.335965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.335978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.789 [2024-11-20 19:01:37.335984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.335996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.789 [2024-11-20 19:01:37.336003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.336015] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.789 [2024-11-20 19:01:37.336023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.336035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.789 [2024-11-20 19:01:37.336042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.336054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.789 [2024-11-20 19:01:37.336060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.336073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.789 [2024-11-20 19:01:37.336081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.336093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.789 [2024-11-20 19:01:37.336100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:17.789 [2024-11-20 19:01:37.336117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.789 [2024-11-20 19:01:37.336125] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.336139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.790 [2024-11-20 19:01:37.336147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.336160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.790 [2024-11-20 19:01:37.336166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.336178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.790 [2024-11-20 19:01:37.336186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.336198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.790 [2024-11-20 19:01:37.336211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.336224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.790 [2024-11-20 19:01:37.336230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.336242] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.790 [2024-11-20 19:01:37.336250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.336262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.790 [2024-11-20 19:01:37.336269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.336281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.790 [2024-11-20 19:01:37.336287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.336300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.790 [2024-11-20 19:01:37.336307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.336320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.790 [2024-11-20 19:01:37.336327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.337097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.790 [2024-11-20 19:01:37.337114] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.337129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.790 [2024-11-20 19:01:37.337137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.337150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:8256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.790 [2024-11-20 19:01:37.337157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.337172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.790 [2024-11-20 19:01:37.337179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.337192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.790 [2024-11-20 19:01:37.337199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.338561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.790 [2024-11-20 19:01:37.338580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.338595] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.790 [2024-11-20 19:01:37.338603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.338616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.790 [2024-11-20 19:01:37.338623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.338635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.790 [2024-11-20 19:01:37.338643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.338655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:8552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.790 [2024-11-20 19:01:37.338662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.338674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.790 [2024-11-20 19:01:37.338680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.338692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.790 [2024-11-20 19:01:37.338700] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.338713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.790 [2024-11-20 19:01:37.338720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.338732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.790 [2024-11-20 19:01:37.338739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.338751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.790 [2024-11-20 19:01:37.338758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.338773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.790 [2024-11-20 19:01:37.338781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.338793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.790 [2024-11-20 19:01:37.338800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.338812] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.790 [2024-11-20 19:01:37.338820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.338832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.790 [2024-11-20 19:01:37.338839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.338851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.790 [2024-11-20 19:01:37.338858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.338870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.790 [2024-11-20 19:01:37.338878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.338892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.790 [2024-11-20 19:01:37.338899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.338912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.790 [2024-11-20 19:01:37.338918] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.338931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.790 [2024-11-20 19:01:37.338939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.338951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.790 [2024-11-20 19:01:37.338958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.338970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.790 [2024-11-20 19:01:37.338976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.338990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.790 [2024-11-20 19:01:37.338997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.339009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.790 [2024-11-20 19:01:37.339017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:17.790 [2024-11-20 19:01:37.339029] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.791 [2024-11-20 19:01:37.339037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.339051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.791 [2024-11-20 19:01:37.339059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.339071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.791 [2024-11-20 19:01:37.339078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.339090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.791 [2024-11-20 19:01:37.339098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.339111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.791 [2024-11-20 19:01:37.339117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.339130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.791 [2024-11-20 19:01:37.339136] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.339149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.791 [2024-11-20 19:01:37.339156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.339169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.791 [2024-11-20 19:01:37.339176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.339188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.791 [2024-11-20 19:01:37.339195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.339213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.791 [2024-11-20 19:01:37.339221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.339234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.791 [2024-11-20 19:01:37.339241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.339253] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.791 [2024-11-20 19:01:37.339262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.339274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.791 [2024-11-20 19:01:37.339282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.339295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.791 [2024-11-20 19:01:37.339301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.339314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.791 [2024-11-20 19:01:37.339322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.339335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.791 [2024-11-20 19:01:37.339342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.339355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.791 [2024-11-20 19:01:37.339362] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.339374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.791 [2024-11-20 19:01:37.339383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.339396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.791 [2024-11-20 19:01:37.339402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.339415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.791 [2024-11-20 19:01:37.339422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.339434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.791 [2024-11-20 19:01:37.339441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.339454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.791 [2024-11-20 19:01:37.339461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.339474] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.791 [2024-11-20 19:01:37.339480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.340109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:8712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.791 [2024-11-20 19:01:37.340124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.340143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.791 [2024-11-20 19:01:37.340150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.340164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.791 [2024-11-20 19:01:37.340171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.340183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.791 [2024-11-20 19:01:37.340191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.340210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.791 [2024-11-20 19:01:37.340218] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.340231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.791 [2024-11-20 19:01:37.340240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.340252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.791 [2024-11-20 19:01:37.340260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.340273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.791 [2024-11-20 19:01:37.340281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.340293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.791 [2024-11-20 19:01:37.340301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.340314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.791 [2024-11-20 19:01:37.340322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.340334] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.791 [2024-11-20 19:01:37.340341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.340354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.791 [2024-11-20 19:01:37.340361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.340373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.791 [2024-11-20 19:01:37.340381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.340394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.791 [2024-11-20 19:01:37.340402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.340415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.791 [2024-11-20 19:01:37.340424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.341286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.791 [2024-11-20 19:01:37.341303] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:17.791 [2024-11-20 19:01:37.341317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.792 [2024-11-20 19:01:37.341325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:17.792 [2024-11-20 19:01:37.341338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.792 [2024-11-20 19:01:37.341347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:17.792 [2024-11-20 19:01:37.341361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.792 [2024-11-20 19:01:37.341368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:17.792 [2024-11-20 19:01:37.341381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.792 [2024-11-20 19:01:37.341389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:17.792 [2024-11-20 19:01:37.341401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.792 [2024-11-20 19:01:37.341408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:17.792 [2024-11-20 19:01:37.341421] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.792 [2024-11-20 19:01:37.341429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:17.792 [2024-11-20 19:01:37.341441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.792 [2024-11-20 19:01:37.341448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:17.792 [2024-11-20 19:01:37.341460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.792 [2024-11-20 19:01:37.341468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:17.792 [2024-11-20 19:01:37.341480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.792 [2024-11-20 19:01:37.341487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:17.792 [2024-11-20 19:01:37.341499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.792 [2024-11-20 19:01:37.341510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:17.792 [2024-11-20 19:01:37.341523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.792 [2024-11-20 19:01:37.341530] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:17.792 [2024-11-20 19:01:37.341546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.792 [2024-11-20 19:01:37.341555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:17.792 [2024-11-20 19:01:37.341570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.792 [2024-11-20 19:01:37.341580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:17.792 [2024-11-20 19:01:37.341594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.792 [2024-11-20 19:01:37.341605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:17.792 [2024-11-20 19:01:37.341620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.792 [2024-11-20 19:01:37.341630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:17.792 [2024-11-20 19:01:37.341647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.792 [2024-11-20 19:01:37.341657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:17.792 [2024-11-20 19:01:37.341672] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.792 [2024-11-20 19:01:37.341682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:17.792 [2024-11-20 19:01:37.341697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.792 [2024-11-20 19:01:37.341706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:17.792 [2024-11-20 19:01:37.341724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.792 [2024-11-20 19:01:37.341734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:17.792 [2024-11-20 19:01:37.341750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.792 [2024-11-20 19:01:37.341761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:17.792 [2024-11-20 19:01:37.342743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.792 [2024-11-20 19:01:37.342760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:17.792 [2024-11-20 19:01:37.342775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.792 [2024-11-20 19:01:37.342782] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:17.792 [2024-11-20 19:01:37.342799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.792 [2024-11-20 19:01:37.342806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:17.792 [2024-11-20 19:01:37.342818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.792 [2024-11-20 19:01:37.342826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:17.792 [2024-11-20 19:01:37.342838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.792 [2024-11-20 19:01:37.342845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:17.792 [2024-11-20 19:01:37.342858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.792 [2024-11-20 19:01:37.342865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:17.792 [2024-11-20 19:01:37.342877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.792 [2024-11-20 19:01:37.342884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:17.792 [2024-11-20 19:01:37.342897] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.792 [2024-11-20 19:01:37.342904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:17.792 [2024-11-20 19:01:37.342916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.792 [2024-11-20 19:01:37.342925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:17.792 [2024-11-20 19:01:37.342938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.792 [2024-11-20 19:01:37.342945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.792 [2024-11-20 19:01:37.342957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.792 [2024-11-20 19:01:37.342964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:17.792 [2024-11-20 19:01:37.342977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.792 [2024-11-20 19:01:37.342984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:17.792 [2024-11-20 19:01:37.342996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.792 [2024-11-20 19:01:37.343004] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:17.792 [2024-11-20 19:01:37.343016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.792 [2024-11-20 19:01:37.343023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:17.792 [2024-11-20 19:01:37.343040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.792 [2024-11-20 19:01:37.343048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:17.793 [2024-11-20 19:01:37.343060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.793 [2024-11-20 19:01:37.343066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:17.793 [2024-11-20 19:01:37.343079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.793 [2024-11-20 19:01:37.343086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:17.793 [2024-11-20 19:01:37.343098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.793 [2024-11-20 19:01:37.343105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:17.793 [2024-11-20 19:01:37.343118] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.793 [2024-11-20 19:01:37.343125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:17.793 [2024-11-20 19:01:37.343137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.793 [2024-11-20 19:01:37.343144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:17.793 [2024-11-20 19:01:37.343157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.793 [2024-11-20 19:01:37.343164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:17.793 [2024-11-20 19:01:37.343176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.793 [2024-11-20 19:01:37.343184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:17.793 [2024-11-20 19:01:37.343565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.793 [2024-11-20 19:01:37.343584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:17.793 [2024-11-20 19:01:37.343602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.793 [2024-11-20 19:01:37.343613] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:17.793 [2024-11-20 19:01:37.343629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.793 [2024-11-20 19:01:37.343640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:17.793 [2024-11-20 19:01:37.343656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.793 [2024-11-20 19:01:37.343667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:17.793 [2024-11-20 19:01:37.343683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:8720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.793 [2024-11-20 19:01:37.343696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:17.793 [2024-11-20 19:01:37.343709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.793 [2024-11-20 19:01:37.343716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:17.793 [2024-11-20 19:01:37.343728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.793 [2024-11-20 19:01:37.343736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:17.793 [2024-11-20 19:01:37.343748] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.793 [2024-11-20 19:01:37.343756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:17.793 [2024-11-20 19:01:37.343771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.793 [2024-11-20 19:01:37.343778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:17.793 [2024-11-20 19:01:37.343791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.793 [2024-11-20 19:01:37.343798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:17.793 [2024-11-20 19:01:37.343811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.793 [2024-11-20 19:01:37.343817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:17.793 [2024-11-20 19:01:37.343830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.793 [2024-11-20 19:01:37.343838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:17.793 [2024-11-20 19:01:37.343851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.793 [2024-11-20 19:01:37.350821] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:17.793 [2024-11-20 19:01:37.350848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.793 [2024-11-20 19:01:37.350855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs elided: queued READ and WRITE commands on qid:1 (nsid:1, lba:7736-9560, len:8) all completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd:003c through sqhd:002d, timestamps 2024-11-20 19:01:37.350869 through 19:01:37.359451 ...]
00:24:17.796 [2024-11-20 19:01:37.359468] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.796 [2024-11-20 19:01:37.359478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:17.796 [2024-11-20 19:01:37.359496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.796 [2024-11-20 19:01:37.359506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:17.796 [2024-11-20 19:01:37.359524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.796 [2024-11-20 19:01:37.359533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:17.796 [2024-11-20 19:01:37.359551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.796 [2024-11-20 19:01:37.359560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:17.796 [2024-11-20 19:01:37.359578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.796 [2024-11-20 19:01:37.359588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:17.796 [2024-11-20 19:01:37.359605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.796 [2024-11-20 19:01:37.359615] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:17.796 [2024-11-20 19:01:37.359632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.796 [2024-11-20 19:01:37.359642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:17.796 [2024-11-20 19:01:37.359658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.796 [2024-11-20 19:01:37.359669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:17.796 [2024-11-20 19:01:37.359685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.796 [2024-11-20 19:01:37.359696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:17.796 [2024-11-20 19:01:37.359713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.796 [2024-11-20 19:01:37.359723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:17.796 [2024-11-20 19:01:37.359740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.796 [2024-11-20 19:01:37.359750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:17.796 [2024-11-20 19:01:37.359767] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.796 [2024-11-20 19:01:37.359779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:17.796 [2024-11-20 19:01:37.359796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.796 [2024-11-20 19:01:37.359814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:17.796 [2024-11-20 19:01:37.359831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.796 [2024-11-20 19:01:37.359841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:17.796 [2024-11-20 19:01:37.359859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:9624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.796 [2024-11-20 19:01:37.359869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:17.796 [2024-11-20 19:01:37.361166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.796 [2024-11-20 19:01:37.361187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:17.796 [2024-11-20 19:01:37.361214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.796 [2024-11-20 19:01:37.361225] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:17.796 [2024-11-20 19:01:37.361243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.796 [2024-11-20 19:01:37.361254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:17.796 [2024-11-20 19:01:37.361272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.796 [2024-11-20 19:01:37.361282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:17.796 [2024-11-20 19:01:37.361299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.796 [2024-11-20 19:01:37.361310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.797 [2024-11-20 19:01:37.361328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.797 [2024-11-20 19:01:37.361339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:17.797 [2024-11-20 19:01:37.361355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.797 [2024-11-20 19:01:37.361366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:17.797 [2024-11-20 19:01:37.361383] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.797 [2024-11-20 19:01:37.361394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:17.797 [2024-11-20 19:01:37.361412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.797 [2024-11-20 19:01:37.361426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:17.797 [2024-11-20 19:01:37.361444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.797 [2024-11-20 19:01:37.361454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:17.797 [2024-11-20 19:01:37.361472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:9448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.797 [2024-11-20 19:01:37.361482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:17.797 [2024-11-20 19:01:37.361499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.797 [2024-11-20 19:01:37.361509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:17.797 [2024-11-20 19:01:37.361527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.797 [2024-11-20 19:01:37.361537] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:17.797 [2024-11-20 19:01:37.361554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.797 [2024-11-20 19:01:37.361564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:17.797 [2024-11-20 19:01:37.361581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.797 [2024-11-20 19:01:37.361591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:17.797 [2024-11-20 19:01:37.361609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.797 [2024-11-20 19:01:37.361619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:17.797 [2024-11-20 19:01:37.361636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.797 [2024-11-20 19:01:37.361646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:17.797 [2024-11-20 19:01:37.361663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.797 [2024-11-20 19:01:37.361674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:17.797 [2024-11-20 19:01:37.361691] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.797 [2024-11-20 19:01:37.361702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:17.797 [2024-11-20 19:01:37.361719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:9336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.797 [2024-11-20 19:01:37.361730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:17.797 [2024-11-20 19:01:37.361747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.797 [2024-11-20 19:01:37.361757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:17.797 [2024-11-20 19:01:37.361775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.797 [2024-11-20 19:01:37.361786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:17.797 [2024-11-20 19:01:37.361804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.797 [2024-11-20 19:01:37.361814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:17.797 [2024-11-20 19:01:37.361831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.797 [2024-11-20 19:01:37.361841] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:17.797 [2024-11-20 19:01:37.361859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.797 [2024-11-20 19:01:37.361869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:17.797 [2024-11-20 19:01:37.361886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.797 [2024-11-20 19:01:37.361897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:17.797 [2024-11-20 19:01:37.361914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.797 [2024-11-20 19:01:37.361925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:17.797 [2024-11-20 19:01:37.361942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.797 [2024-11-20 19:01:37.361952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:17.797 [2024-11-20 19:01:37.361969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.797 [2024-11-20 19:01:37.361980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:17.797 [2024-11-20 19:01:37.361997] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:9592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.797 [2024-11-20 19:01:37.362007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:17.797 [2024-11-20 19:01:37.362024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.797 [2024-11-20 19:01:37.362035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:17.797 [2024-11-20 19:01:37.364414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.797 [2024-11-20 19:01:37.364434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:17.797 [2024-11-20 19:01:37.364448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.797 [2024-11-20 19:01:37.364456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:17.797 [2024-11-20 19:01:37.364472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.797 [2024-11-20 19:01:37.364480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:17.797 [2024-11-20 19:01:37.364492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.797 [2024-11-20 19:01:37.364504] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:17.797 [2024-11-20 19:01:37.364517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.797 [2024-11-20 19:01:37.364524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:17.797 [2024-11-20 19:01:37.364536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.797 [2024-11-20 19:01:37.364544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.797 [2024-11-20 19:01:37.364556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.797 [2024-11-20 19:01:37.364564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:17.797 [2024-11-20 19:01:37.364576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.797 [2024-11-20 19:01:37.364584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:17.797 [2024-11-20 19:01:37.364596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.797 [2024-11-20 19:01:37.364604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.364617] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.798 [2024-11-20 19:01:37.364624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.364637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.798 [2024-11-20 19:01:37.364645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.364658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.798 [2024-11-20 19:01:37.364665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.364678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.798 [2024-11-20 19:01:37.364686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.364699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.798 [2024-11-20 19:01:37.364706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.364720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.798 [2024-11-20 19:01:37.364729] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.364741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.798 [2024-11-20 19:01:37.364749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.364761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.798 [2024-11-20 19:01:37.364769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.364782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.798 [2024-11-20 19:01:37.364790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.364803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:9240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.798 [2024-11-20 19:01:37.364811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.364824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.798 [2024-11-20 19:01:37.364832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.364844] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.798 [2024-11-20 19:01:37.364853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.364865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.798 [2024-11-20 19:01:37.364873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.364886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.798 [2024-11-20 19:01:37.364894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.364907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.798 [2024-11-20 19:01:37.364914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.364928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.798 [2024-11-20 19:01:37.364935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.364948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.798 [2024-11-20 19:01:37.364956] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.364969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.798 [2024-11-20 19:01:37.364978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.364991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.798 [2024-11-20 19:01:37.364999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.365012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.798 [2024-11-20 19:01:37.365020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.365033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.798 [2024-11-20 19:01:37.365040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.365053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.798 [2024-11-20 19:01:37.365060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.365073] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.798 [2024-11-20 19:01:37.365080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.365093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.798 [2024-11-20 19:01:37.365101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.365113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.798 [2024-11-20 19:01:37.365121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.365133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.798 [2024-11-20 19:01:37.365140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.365153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.798 [2024-11-20 19:01:37.365160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.365172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.798 [2024-11-20 19:01:37.365180] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.365192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.798 [2024-11-20 19:01:37.365199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.365218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.798 [2024-11-20 19:01:37.365225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.365240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.798 [2024-11-20 19:01:37.365248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.365260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.798 [2024-11-20 19:01:37.365268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.365281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.798 [2024-11-20 19:01:37.365288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.365860] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.798 [2024-11-20 19:01:37.365874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.365890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.798 [2024-11-20 19:01:37.365898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.365912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:9864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.798 [2024-11-20 19:01:37.365920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.365933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:9880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.798 [2024-11-20 19:01:37.365940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.365954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.798 [2024-11-20 19:01:37.365962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:17.798 [2024-11-20 19:01:37.365975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.798 [2024-11-20 19:01:37.365983] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.365997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.799 [2024-11-20 19:01:37.366005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.366018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.799 [2024-11-20 19:01:37.366026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.366364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.799 [2024-11-20 19:01:37.366379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.366394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.799 [2024-11-20 19:01:37.366406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.366418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.799 [2024-11-20 19:01:37.366427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.366439] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.799 [2024-11-20 19:01:37.366447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.366459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.799 [2024-11-20 19:01:37.366468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.366480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.799 [2024-11-20 19:01:37.366489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.366502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.799 [2024-11-20 19:01:37.366510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.366523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.799 [2024-11-20 19:01:37.366531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.366545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.799 [2024-11-20 19:01:37.366552] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.366566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.799 [2024-11-20 19:01:37.366574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.367671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:10016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.799 [2024-11-20 19:01:37.367689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.367704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.799 [2024-11-20 19:01:37.367713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.367727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.799 [2024-11-20 19:01:37.367735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.367748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.799 [2024-11-20 19:01:37.367760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.367773] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.799 [2024-11-20 19:01:37.367782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.367795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.799 [2024-11-20 19:01:37.367803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.367816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.799 [2024-11-20 19:01:37.367824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.367836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.799 [2024-11-20 19:01:37.367845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.367858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.799 [2024-11-20 19:01:37.367866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.367878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.799 [2024-11-20 19:01:37.367886] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.367898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.799 [2024-11-20 19:01:37.367906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.367919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.799 [2024-11-20 19:01:37.367927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.367938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.799 [2024-11-20 19:01:37.367946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.367959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.799 [2024-11-20 19:01:37.367967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.368423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.799 [2024-11-20 19:01:37.368439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.368453] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.799 [2024-11-20 19:01:37.368461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.368478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.799 [2024-11-20 19:01:37.368485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.368498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:9160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.799 [2024-11-20 19:01:37.368505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.368519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.799 [2024-11-20 19:01:37.368526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.368540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.799 [2024-11-20 19:01:37.368548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.368561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.799 [2024-11-20 19:01:37.368569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.368582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.799 [2024-11-20 19:01:37.368589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.368602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.799 [2024-11-20 19:01:37.368610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.368623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.799 [2024-11-20 19:01:37.368631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.368643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.799 [2024-11-20 19:01:37.368651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.368664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.799 [2024-11-20 19:01:37.368671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:17.799 [2024-11-20 19:01:37.368684] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.800 [2024-11-20 19:01:37.368692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.368705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.800 [2024-11-20 19:01:37.368713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.368726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.800 [2024-11-20 19:01:37.368735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.368749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.800 [2024-11-20 19:01:37.368757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.368770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.800 [2024-11-20 19:01:37.368777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.368790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.800 [2024-11-20 19:01:37.368798] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.368811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.800 [2024-11-20 19:01:37.368819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.368832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.800 [2024-11-20 19:01:37.368839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.368853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.800 [2024-11-20 19:01:37.368859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.368873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.800 [2024-11-20 19:01:37.368880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.368894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.800 [2024-11-20 19:01:37.368901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.368914] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.800 [2024-11-20 19:01:37.368922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.368935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.800 [2024-11-20 19:01:37.368942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.368956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.800 [2024-11-20 19:01:37.368963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.368976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.800 [2024-11-20 19:01:37.368985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.368998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.800 [2024-11-20 19:01:37.369006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.369019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.800 [2024-11-20 19:01:37.369027] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.369624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.800 [2024-11-20 19:01:37.369639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.369655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:10072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.800 [2024-11-20 19:01:37.369663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.369677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:10088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.800 [2024-11-20 19:01:37.369685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.369698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.800 [2024-11-20 19:01:37.369706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.369719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.800 [2024-11-20 19:01:37.369728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.369741] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.800 [2024-11-20 19:01:37.369749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.369763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.800 [2024-11-20 19:01:37.369770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.369783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.800 [2024-11-20 19:01:37.369791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.369804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.800 [2024-11-20 19:01:37.369811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.369824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.800 [2024-11-20 19:01:37.369835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.369847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.800 [2024-11-20 19:01:37.369855] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.369867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.800 [2024-11-20 19:01:37.369876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.369889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.800 [2024-11-20 19:01:37.369897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.369909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.800 [2024-11-20 19:01:37.369917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.369929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.800 [2024-11-20 19:01:37.369938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.369950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.800 [2024-11-20 19:01:37.369958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.369972] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.800 [2024-11-20 19:01:37.369980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.369994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.800 [2024-11-20 19:01:37.370002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.370014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.800 [2024-11-20 19:01:37.370022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.370035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.800 [2024-11-20 19:01:37.370042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.370055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.800 [2024-11-20 19:01:37.370062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.370075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.800 [2024-11-20 19:01:37.370083] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:17.800 [2024-11-20 19:01:37.370098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.801 [2024-11-20 19:01:37.370105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:17.801 [2024-11-20 19:01:37.370433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.801 [2024-11-20 19:01:37.370447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:17.801 [2024-11-20 19:01:37.370461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.801 [2024-11-20 19:01:37.370470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:17.801 [2024-11-20 19:01:37.370482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.801 [2024-11-20 19:01:37.370491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:17.801 [2024-11-20 19:01:37.370503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.801 [2024-11-20 19:01:37.370511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:17.801 [2024-11-20 19:01:37.370525] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.801 [2024-11-20 19:01:37.370532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:17.801 [2024-11-20 19:01:37.370546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.801 [2024-11-20 19:01:37.370554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:17.801 [2024-11-20 19:01:37.370567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.801 [2024-11-20 19:01:37.370574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:17.801 [2024-11-20 19:01:37.370588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.801 [2024-11-20 19:01:37.370595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.801 [2024-11-20 19:01:37.370609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.801 [2024-11-20 19:01:37.370616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:17.801 [2024-11-20 19:01:37.370629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.801 [2024-11-20 19:01:37.370637] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:17.801 [2024-11-20 19:01:37.370649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.801 [2024-11-20 19:01:37.370657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:17.801 [2024-11-20 19:01:37.370676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.801 [2024-11-20 19:01:37.370683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:17.801 [2024-11-20 19:01:37.370696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.801 [2024-11-20 19:01:37.370704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:17.801 [2024-11-20 19:01:37.370717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.801 [2024-11-20 19:01:37.370725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:17.801 [2024-11-20 19:01:37.370738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.801 [2024-11-20 19:01:37.370745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:17.801 [2024-11-20 19:01:37.370758] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.801 [2024-11-20 19:01:37.370766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:17.801 [2024-11-20 19:01:37.370779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.801 [2024-11-20 19:01:37.370787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:17.801 [2024-11-20 19:01:37.370800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.801 [2024-11-20 19:01:37.370808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:17.801 [2024-11-20 19:01:37.370822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.801 [2024-11-20 19:01:37.370830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:17.801 [2024-11-20 19:01:37.370842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:9904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.801 [2024-11-20 19:01:37.370850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:17.801 [2024-11-20 19:01:37.370863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.801 [2024-11-20 19:01:37.370871] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:17.801 [2024-11-20 19:01:37.372051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.801 [2024-11-20 19:01:37.372069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:17.801 [2024-11-20 19:01:37.372084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.801 [2024-11-20 19:01:37.372092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:17.801 [2024-11-20 19:01:37.372105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.801 [2024-11-20 19:01:37.372116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:17.801 [2024-11-20 19:01:37.372130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.801 [2024-11-20 19:01:37.372138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:17.801 [2024-11-20 19:01:37.372151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.801 [2024-11-20 19:01:37.372159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:17.801 [2024-11-20 19:01:37.372173] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.801 [2024-11-20 19:01:37.372181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:17.801 [2024-11-20 19:01:37.372194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.801 [2024-11-20 19:01:37.372207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:17.801 [2024-11-20 19:01:37.372221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.801 [2024-11-20 19:01:37.372229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:17.801 [2024-11-20 19:01:37.372242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.801 [2024-11-20 19:01:37.372250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:17.801 [2024-11-20 19:01:37.372263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.801 [2024-11-20 19:01:37.372271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:17.801 [2024-11-20 19:01:37.372283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.801 [2024-11-20 19:01:37.372291] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:17.801 [2024-11-20 19:01:37.372304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.801 [2024-11-20 19:01:37.372311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:17.801 [2024-11-20 19:01:37.372324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.801 [2024-11-20 19:01:37.372332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.372344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.802 [2024-11-20 19:01:37.372352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.372365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.802 [2024-11-20 19:01:37.372374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.372387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.802 [2024-11-20 19:01:37.372395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.372408] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.802 [2024-11-20 19:01:37.372416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.372428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.802 [2024-11-20 19:01:37.372436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.372449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.802 [2024-11-20 19:01:37.372457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.372469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.802 [2024-11-20 19:01:37.372477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.372489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.802 [2024-11-20 19:01:37.372498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.372510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.802 [2024-11-20 19:01:37.372518] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.372531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.802 [2024-11-20 19:01:37.372540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.372552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.802 [2024-11-20 19:01:37.372560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.372572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.802 [2024-11-20 19:01:37.372581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.372594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.802 [2024-11-20 19:01:37.372602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.372614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.802 [2024-11-20 19:01:37.372622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.372636] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.802 [2024-11-20 19:01:37.372644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.374738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.802 [2024-11-20 19:01:37.374757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.374772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.802 [2024-11-20 19:01:37.374780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.374793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.802 [2024-11-20 19:01:37.374800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.374813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.802 [2024-11-20 19:01:37.374827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.374839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.802 [2024-11-20 19:01:37.374847] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.374859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.802 [2024-11-20 19:01:37.374866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.374879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.802 [2024-11-20 19:01:37.374886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.374899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.802 [2024-11-20 19:01:37.374907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.374919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.802 [2024-11-20 19:01:37.374926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.374939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.802 [2024-11-20 19:01:37.374946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.374960] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.802 [2024-11-20 19:01:37.374967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.374984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.802 [2024-11-20 19:01:37.374991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.375005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.802 [2024-11-20 19:01:37.375013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.375025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.802 [2024-11-20 19:01:37.375034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.375047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.802 [2024-11-20 19:01:37.375053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.375067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.802 [2024-11-20 19:01:37.375075] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.375089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.802 [2024-11-20 19:01:37.375096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.375109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.802 [2024-11-20 19:01:37.375117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.375130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.802 [2024-11-20 19:01:37.375137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.375150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.802 [2024-11-20 19:01:37.375158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.375171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.802 [2024-11-20 19:01:37.375178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.375191] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.802 [2024-11-20 19:01:37.375198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.375217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:10352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.802 [2024-11-20 19:01:37.375225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.802 [2024-11-20 19:01:37.375239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.802 [2024-11-20 19:01:37.375249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:17.803 [2024-11-20 19:01:37.375263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.803 [2024-11-20 19:01:37.375271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:17.803 [2024-11-20 19:01:37.375283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.803 [2024-11-20 19:01:37.375292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:17.803 [2024-11-20 19:01:37.375304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.803 [2024-11-20 19:01:37.375312] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:17.803 [2024-11-20 19:01:37.375324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.803 [2024-11-20 19:01:37.375333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:17.803 [2024-11-20 19:01:37.375346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.803 [2024-11-20 19:01:37.375353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:17.803 [2024-11-20 19:01:37.375366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.803 [2024-11-20 19:01:37.375373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:17.803 [2024-11-20 19:01:37.375385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.803 [2024-11-20 19:01:37.375393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:17.803 [2024-11-20 19:01:37.375405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:9544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.803 [2024-11-20 19:01:37.375412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:17.803 [2024-11-20 19:01:37.375425] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.803 [2024-11-20 19:01:37.375432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:17.803 [2024-11-20 19:01:37.375444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.803 [2024-11-20 19:01:37.375452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:17.803 [2024-11-20 19:01:37.375464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.803 [2024-11-20 19:01:37.375472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:17.803 [2024-11-20 19:01:37.375484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.803 [2024-11-20 19:01:37.375494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:17.803 [2024-11-20 19:01:37.375506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.803 [2024-11-20 19:01:37.375514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:17.803 [2024-11-20 19:01:37.375526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.803 [2024-11-20 19:01:37.375534] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:24:17.803 [2024-11-20 19:01:37.375546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.803 [2024-11-20 19:01:37.375554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs omitted: every queued READ and WRITE on qid:1 completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:24:17.806 [2024-11-20 19:01:37.382407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.806 [2024-11-20 19:01:37.382422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:24:17.806 [2024-11-20 19:01:37.382437] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.806 [2024-11-20 19:01:37.382445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:17.806 [2024-11-20 19:01:37.382458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.806 [2024-11-20 19:01:37.382465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:17.806 [2024-11-20 19:01:37.382479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.806 [2024-11-20 19:01:37.382486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:17.806 [2024-11-20 19:01:37.382499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.806 [2024-11-20 19:01:37.382506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:17.806 [2024-11-20 19:01:37.382520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.806 [2024-11-20 19:01:37.382527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:17.806 [2024-11-20 19:01:37.382543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.806 [2024-11-20 19:01:37.382551] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:17.806 [2024-11-20 19:01:37.382563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.806 [2024-11-20 19:01:37.382571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:17.806 [2024-11-20 19:01:37.382584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.806 [2024-11-20 19:01:37.382592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:17.806 [2024-11-20 19:01:37.382604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.806 [2024-11-20 19:01:37.382612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:17.806 [2024-11-20 19:01:37.382624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:10768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.806 [2024-11-20 19:01:37.382632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:17.806 [2024-11-20 19:01:37.382644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.806 [2024-11-20 19:01:37.382652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:17.806 [2024-11-20 19:01:37.382664] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.806 [2024-11-20 19:01:37.382672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:17.806 [2024-11-20 19:01:37.382684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.806 [2024-11-20 19:01:37.382692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:17.806 [2024-11-20 19:01:37.382704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.806 [2024-11-20 19:01:37.382711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:17.806 [2024-11-20 19:01:37.382724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.806 [2024-11-20 19:01:37.382731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:17.806 [2024-11-20 19:01:37.382744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.806 [2024-11-20 19:01:37.382752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:17.806 [2024-11-20 19:01:37.382764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.806 [2024-11-20 19:01:37.382772] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:17.806 [2024-11-20 19:01:37.382784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.806 [2024-11-20 19:01:37.382793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:17.806 [2024-11-20 19:01:37.382805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.806 [2024-11-20 19:01:37.382813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:17.806 [2024-11-20 19:01:37.382826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.806 [2024-11-20 19:01:37.382833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:17.806 [2024-11-20 19:01:37.383582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.806 [2024-11-20 19:01:37.383601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:17.806 [2024-11-20 19:01:37.383627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:10448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.806 [2024-11-20 19:01:37.383636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:17.807 [2024-11-20 19:01:37.383649] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.807 [2024-11-20 19:01:37.383657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:17.807 [2024-11-20 19:01:37.383670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.807 [2024-11-20 19:01:37.383678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:17.807 [2024-11-20 19:01:37.383691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.807 [2024-11-20 19:01:37.383699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:17.807 [2024-11-20 19:01:37.383711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.807 [2024-11-20 19:01:37.383719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:17.807 [2024-11-20 19:01:37.383733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.807 [2024-11-20 19:01:37.383740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:17.807 [2024-11-20 19:01:37.383753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.807 [2024-11-20 19:01:37.383760] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:17.807 [2024-11-20 19:01:37.383773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.807 [2024-11-20 19:01:37.383781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:17.807 [2024-11-20 19:01:37.383794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.807 [2024-11-20 19:01:37.383805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.807 [2024-11-20 19:01:37.383818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.807 [2024-11-20 19:01:37.383825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:17.807 [2024-11-20 19:01:37.383838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.807 [2024-11-20 19:01:37.383846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:17.807 [2024-11-20 19:01:37.383859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:10104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.807 [2024-11-20 19:01:37.383866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:17.807 [2024-11-20 19:01:37.383879] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.807 [2024-11-20 19:01:37.383886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:17.807 [2024-11-20 19:01:37.383899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.807 [2024-11-20 19:01:37.383906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:17.807 [2024-11-20 19:01:37.383919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.807 [2024-11-20 19:01:37.383926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:17.807 [2024-11-20 19:01:37.383940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.807 [2024-11-20 19:01:37.383948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:17.807 [2024-11-20 19:01:37.383962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.807 [2024-11-20 19:01:37.383970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:17.807 [2024-11-20 19:01:37.383982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.807 [2024-11-20 19:01:37.383990] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:17.807 [2024-11-20 19:01:37.384002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.807 [2024-11-20 19:01:37.384011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:17.807 [2024-11-20 19:01:37.384023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.807 [2024-11-20 19:01:37.384031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:17.807 [2024-11-20 19:01:37.384044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.807 [2024-11-20 19:01:37.384052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:17.807 10582.22 IOPS, 41.34 MiB/s [2024-11-20T18:01:40.132Z] 10613.86 IOPS, 41.46 MiB/s [2024-11-20T18:01:40.132Z] Received shutdown signal, test time was about 28.786605 seconds 00:24:17.807 00:24:17.807 Latency(us) 00:24:17.807 [2024-11-20T18:01:40.132Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.807 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:17.807 Verification LBA range: start 0x0 length 0x4000 00:24:17.807 Nvme0n1 : 28.79 10634.38 41.54 0.00 0.00 12016.15 292.57 3019898.88 00:24:17.807 [2024-11-20T18:01:40.132Z] =================================================================================================================== 00:24:17.807 [2024-11-20T18:01:40.132Z] Total : 10634.38 41.54 0.00 0.00 
12016.15 292.57 3019898.88 00:24:17.807 19:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:17.807 19:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:17.807 19:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:17.807 19:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:17.807 19:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:17.807 19:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:24:17.807 19:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:17.807 19:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:24:17.807 19:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:17.807 19:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:17.807 rmmod nvme_tcp 00:24:17.807 rmmod nvme_fabrics 00:24:17.807 rmmod nvme_keyring 00:24:18.067 19:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:18.067 19:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:24:18.067 19:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:24:18.067 19:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3753946 ']' 00:24:18.067 19:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3753946 00:24:18.067 
19:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3753946 ']' 00:24:18.067 19:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3753946 00:24:18.067 19:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:18.067 19:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:18.067 19:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3753946 00:24:18.067 19:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:18.067 19:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:18.067 19:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3753946' 00:24:18.067 killing process with pid 3753946 00:24:18.067 19:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3753946 00:24:18.067 19:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3753946 00:24:18.067 19:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:18.067 19:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:18.067 19:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:18.067 19:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:24:18.067 19:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:24:18.067 19:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:18.067 19:01:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:24:18.068 19:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:18.068 19:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:18.068 19:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.068 19:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:18.068 19:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.604 19:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:20.604 00:24:20.604 real 0m40.788s 00:24:20.604 user 1m50.128s 00:24:20.604 sys 0m11.944s 00:24:20.604 19:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:20.604 19:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:20.604 ************************************ 00:24:20.604 END TEST nvmf_host_multipath_status 00:24:20.604 ************************************ 00:24:20.604 19:01:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:20.604 19:01:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:20.604 19:01:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:20.604 19:01:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.604 ************************************ 00:24:20.604 START TEST nvmf_discovery_remove_ifc 00:24:20.604 ************************************ 00:24:20.604 
19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:20.604 * Looking for test storage... 00:24:20.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:20.604 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:20.604 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:24:20.604 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:20.604 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:20.604 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:20.604 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:20.604 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:20.604 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:24:20.604 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:24:20.604 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:24:20.604 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:24:20.604 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:24:20.604 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:24:20.604 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:24:20.604 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:20.604 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:24:20.604 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:24:20.604 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:20.604 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:20.604 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:24:20.604 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:24:20.604 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:20.604 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:24:20.604 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:24:20.605 19:01:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:20.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.605 --rc genhtml_branch_coverage=1 00:24:20.605 --rc genhtml_function_coverage=1 00:24:20.605 --rc genhtml_legend=1 00:24:20.605 --rc geninfo_all_blocks=1 00:24:20.605 --rc geninfo_unexecuted_blocks=1 00:24:20.605 00:24:20.605 ' 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:20.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.605 --rc genhtml_branch_coverage=1 00:24:20.605 --rc genhtml_function_coverage=1 00:24:20.605 --rc genhtml_legend=1 00:24:20.605 --rc geninfo_all_blocks=1 00:24:20.605 --rc geninfo_unexecuted_blocks=1 00:24:20.605 00:24:20.605 ' 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:20.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.605 --rc genhtml_branch_coverage=1 00:24:20.605 --rc genhtml_function_coverage=1 00:24:20.605 --rc genhtml_legend=1 00:24:20.605 --rc geninfo_all_blocks=1 00:24:20.605 --rc geninfo_unexecuted_blocks=1 00:24:20.605 00:24:20.605 ' 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:20.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.605 --rc genhtml_branch_coverage=1 00:24:20.605 --rc genhtml_function_coverage=1 00:24:20.605 --rc genhtml_legend=1 00:24:20.605 --rc geninfo_all_blocks=1 00:24:20.605 --rc geninfo_unexecuted_blocks=1 00:24:20.605 00:24:20.605 ' 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:20.605 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:20.605 
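The `[: : integer expression expected` message a few records up (from `'[' '' -eq 1 ']'` at nvmf/common.sh line 33) is the bash `test` builtin rejecting an empty operand for `-eq`, which is non-fatal here since the guard only gates an optional branch. A minimal reproduction, where `FLAG` is a stand-in for whatever variable expands empty at that line:

```shell
#!/bin/sh
# test(1)'s -eq requires integer operands; an empty expansion trips
# "integer expression expected" and returns a nonzero status.
FLAG=''                                    # stand-in for the unset flag
if [ "$FLAG" -eq 1 ] 2>/dev/null; then     # errors out, takes the else path
    echo "flag set"
else
    echo "guard skipped (non-fatal)"
fi
# Defaulting the expansion avoids the noise entirely:
if [ "${FLAG:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "guard skipped quietly"
fi
```

Because the failing `[` simply returns a nonzero status, the surrounding `if` falls through and the test run continues, which is why the log proceeds normally after the message.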
19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:24:20.605 19:01:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:27.174 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:27.174 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:27.174 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:27.175 Found net devices under 0000:86:00.0: cvl_0_0 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:27.175 19:01:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:27.175 Found net devices under 0000:86:00.1: cvl_0_1 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:27.175 19:01:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:27.175 19:01:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:27.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:27.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.397 ms 00:24:27.175 00:24:27.175 --- 10.0.0.2 ping statistics --- 00:24:27.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.175 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:27.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:27.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:24:27.175 00:24:27.175 --- 10.0.0.1 ping statistics --- 00:24:27.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.175 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3762957 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 3762957 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3762957 ']' 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:27.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:27.175 19:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:27.175 [2024-11-20 19:01:48.626740] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:24:27.175 [2024-11-20 19:01:48.626786] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:27.175 [2024-11-20 19:01:48.707874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.175 [2024-11-20 19:01:48.746569] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:27.175 [2024-11-20 19:01:48.746604] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:27.175 [2024-11-20 19:01:48.746611] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:27.175 [2024-11-20 19:01:48.746617] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:27.175 [2024-11-20 19:01:48.746621] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:27.175 [2024-11-20 19:01:48.747167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:27.175 19:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:27.175 19:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:27.175 19:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:27.175 19:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:27.175 19:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:27.175 19:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:27.175 19:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:27.175 19:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.175 19:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:27.433 [2024-11-20 19:01:49.504133] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.433 [2024-11-20 19:01:49.512313] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:27.433 null0 00:24:27.434 [2024-11-20 19:01:49.544286] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:24:27.434 19:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.434 19:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3762991 00:24:27.434 19:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3762991 /tmp/host.sock 00:24:27.434 19:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:27.434 19:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3762991 ']' 00:24:27.434 19:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:27.434 19:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:27.434 19:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:27.434 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:27.434 19:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:27.434 19:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:27.434 [2024-11-20 19:01:49.615082] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:24:27.434 [2024-11-20 19:01:49.615126] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3762991 ] 00:24:27.434 [2024-11-20 19:01:49.689525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.434 [2024-11-20 19:01:49.734545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.434 19:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:27.434 19:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:27.692 19:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:27.692 19:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:27.692 19:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.692 19:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:27.692 19:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.692 19:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:27.692 19:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.692 19:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:27.692 19:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.692 19:01:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:27.692 19:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.692 19:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:28.627 [2024-11-20 19:01:50.912354] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:28.627 [2024-11-20 19:01:50.912378] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:28.627 [2024-11-20 19:01:50.912394] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:28.886 [2024-11-20 19:01:51.000660] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:28.886 [2024-11-20 19:01:51.101353] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:28.886 [2024-11-20 19:01:51.102187] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x13f1a10:1 started. 
00:24:28.886 [2024-11-20 19:01:51.103524] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:28.886 [2024-11-20 19:01:51.103565] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:28.886 [2024-11-20 19:01:51.103584] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:28.886 [2024-11-20 19:01:51.103597] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:28.886 [2024-11-20 19:01:51.103617] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:28.886 19:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.886 19:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:28.886 19:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:28.886 [2024-11-20 19:01:51.109578] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x13f1a10 was disconnected and freed. delete nvme_qpair. 
00:24:28.886 19:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:28.886 19:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:28.886 19:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.886 19:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:28.886 19:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:28.886 19:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:28.886 19:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.886 19:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:28.886 19:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:28.886 19:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:29.144 19:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:29.144 19:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:29.144 19:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:29.144 19:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:29.144 19:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.144 19:01:51 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:29.144 19:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:29.144 19:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:29.144 19:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.144 19:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:29.144 19:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:30.140 19:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:30.140 19:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:30.140 19:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:30.140 19:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.140 19:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:30.140 19:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:30.140 19:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:30.140 19:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.140 19:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:30.140 19:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:31.113 19:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:24:31.113 19:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:31.113 19:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:31.113 19:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.113 19:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:31.113 19:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:31.113 19:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:31.113 19:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.113 19:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:31.113 19:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:32.100 19:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:32.100 19:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:32.100 19:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:32.100 19:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.100 19:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:32.100 19:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.100 19:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:32.358 19:01:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.358 19:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:32.358 19:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:33.292 19:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:33.292 19:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:33.292 19:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:33.292 19:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.292 19:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:33.292 19:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:33.292 19:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:33.292 19:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.292 19:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:33.292 19:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:34.225 19:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:34.225 19:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:34.225 19:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:34.225 19:01:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.225 19:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:34.225 19:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:34.225 19:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:34.225 19:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.225 [2024-11-20 19:01:56.545005] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:34.225 [2024-11-20 19:01:56.545057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.225 [2024-11-20 19:01:56.545070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.225 [2024-11-20 19:01:56.545080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.225 [2024-11-20 19:01:56.545088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.225 [2024-11-20 19:01:56.545099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.225 [2024-11-20 19:01:56.545107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.225 [2024-11-20 19:01:56.545114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.225 
[2024-11-20 19:01:56.545125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.225 [2024-11-20 19:01:56.545136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.225 [2024-11-20 19:01:56.545144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.225 [2024-11-20 19:01:56.545152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ce220 is same with the state(6) to be set 00:24:34.483 [2024-11-20 19:01:56.555026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ce220 (9): Bad file descriptor 00:24:34.483 19:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:34.483 19:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:34.483 [2024-11-20 19:01:56.565062] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:34.483 [2024-11-20 19:01:56.565075] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:34.483 [2024-11-20 19:01:56.565079] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:34.483 [2024-11-20 19:01:56.565084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:34.483 [2024-11-20 19:01:56.565108] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:35.418 19:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:35.418 19:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:35.418 19:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:35.418 19:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.418 19:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:35.418 19:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:35.418 19:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:35.418 [2024-11-20 19:01:57.575239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:35.418 [2024-11-20 19:01:57.575315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ce220 with addr=10.0.0.2, port=4420 00:24:35.418 [2024-11-20 19:01:57.575351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ce220 is same with the state(6) to be set 00:24:35.418 [2024-11-20 19:01:57.575419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ce220 (9): Bad file descriptor 00:24:35.418 [2024-11-20 19:01:57.576392] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:24:35.418 [2024-11-20 19:01:57.576461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:35.418 [2024-11-20 19:01:57.576485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:35.418 [2024-11-20 19:01:57.576508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:35.418 [2024-11-20 19:01:57.576530] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:35.418 [2024-11-20 19:01:57.576546] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:35.418 [2024-11-20 19:01:57.576559] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:35.418 [2024-11-20 19:01:57.576582] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:35.418 [2024-11-20 19:01:57.576597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:35.418 19:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.418 19:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:35.418 19:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:36.351 [2024-11-20 19:01:58.579111] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:36.351 [2024-11-20 19:01:58.579142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:24:36.351 [2024-11-20 19:01:58.579158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:36.351 [2024-11-20 19:01:58.579166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:36.351 [2024-11-20 19:01:58.579173] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:24:36.351 [2024-11-20 19:01:58.579180] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:36.351 [2024-11-20 19:01:58.579185] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:36.351 [2024-11-20 19:01:58.579189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:36.351 [2024-11-20 19:01:58.579219] bdev_nvme.c:7230:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:36.351 [2024-11-20 19:01:58.579248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:36.352 [2024-11-20 19:01:58.579258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.352 [2024-11-20 19:01:58.579268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:36.352 [2024-11-20 19:01:58.579275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.352 [2024-11-20 19:01:58.579282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:36.352 [2024-11-20 19:01:58.579289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.352 [2024-11-20 19:01:58.579302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:36.352 [2024-11-20 19:01:58.579308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.352 [2024-11-20 19:01:58.579315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:36.352 [2024-11-20 19:01:58.579321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.352 [2024-11-20 19:01:58.579328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:24:36.352 [2024-11-20 19:01:58.579673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13bd900 (9): Bad file descriptor 00:24:36.352 [2024-11-20 19:01:58.580682] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:36.352 [2024-11-20 19:01:58.580693] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:24:36.352 19:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:36.352 19:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:36.352 19:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:36.352 19:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:24:36.352 19:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:36.352 19:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:36.352 19:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:36.352 19:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.352 19:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:36.352 19:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:36.352 19:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:36.610 19:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:36.610 19:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:36.610 19:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:36.610 19:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:36.610 19:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.610 19:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:36.610 19:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:36.610 19:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:36.610 19:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:24:36.610 19:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:36.610 19:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:37.541 19:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:37.541 19:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:37.541 19:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:37.541 19:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.541 19:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:37.541 19:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:37.541 19:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:37.541 19:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.541 19:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:37.541 19:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:38.471 [2024-11-20 19:02:00.595181] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:38.471 [2024-11-20 19:02:00.595206] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:38.471 [2024-11-20 19:02:00.595219] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:38.471 [2024-11-20 19:02:00.681515] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:38.472 [2024-11-20 19:02:00.736058] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:24:38.472 [2024-11-20 19:02:00.736647] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x13c2830:1 started. 00:24:38.472 [2024-11-20 19:02:00.737709] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:38.472 [2024-11-20 19:02:00.737743] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:38.472 [2024-11-20 19:02:00.737760] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:38.472 [2024-11-20 19:02:00.737773] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:38.472 [2024-11-20 19:02:00.737780] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:38.472 [2024-11-20 19:02:00.744123] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x13c2830 was disconnected and freed. delete nvme_qpair. 
00:24:38.730 19:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:38.730 19:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:38.730 19:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:38.730 19:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.730 19:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:38.730 19:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.730 19:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:38.730 19:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.730 19:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:38.730 19:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:38.730 19:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3762991 00:24:38.730 19:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3762991 ']' 00:24:38.730 19:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3762991 00:24:38.730 19:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:24:38.730 19:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:38.730 19:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3762991 
00:24:38.730 19:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:38.730 19:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:38.730 19:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3762991' 00:24:38.730 killing process with pid 3762991 00:24:38.730 19:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3762991 00:24:38.730 19:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3762991 00:24:38.988 19:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:38.988 19:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:38.988 19:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:24:38.988 19:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:38.988 19:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:24:38.988 19:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:38.988 19:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:38.988 rmmod nvme_tcp 00:24:38.988 rmmod nvme_fabrics 00:24:38.988 rmmod nvme_keyring 00:24:38.988 19:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:38.988 19:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:24:38.988 19:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:24:38.988 19:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3762957 ']' 00:24:38.988 
19:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3762957 00:24:38.988 19:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3762957 ']' 00:24:38.988 19:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3762957 00:24:38.988 19:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:24:38.988 19:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:38.988 19:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3762957 00:24:38.988 19:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:38.988 19:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:38.988 19:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3762957' 00:24:38.988 killing process with pid 3762957 00:24:38.988 19:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3762957 00:24:38.988 19:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3762957 00:24:39.246 19:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:39.246 19:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:39.246 19:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:39.246 19:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:24:39.246 19:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:24:39.246 19:02:01 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:39.246 19:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:24:39.246 19:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:39.246 19:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:39.246 19:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.246 19:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:39.246 19:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.149 19:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:41.149 00:24:41.149 real 0m20.949s 00:24:41.149 user 0m25.375s 00:24:41.149 sys 0m5.744s 00:24:41.149 19:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:41.149 19:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:41.149 ************************************ 00:24:41.149 END TEST nvmf_discovery_remove_ifc 00:24:41.149 ************************************ 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.409 ************************************ 
00:24:41.409 START TEST nvmf_identify_kernel_target 00:24:41.409 ************************************ 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:41.409 * Looking for test storage... 00:24:41.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:24:41.409 19:02:03 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:41.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.409 --rc genhtml_branch_coverage=1 00:24:41.409 --rc genhtml_function_coverage=1 00:24:41.409 --rc genhtml_legend=1 00:24:41.409 --rc geninfo_all_blocks=1 00:24:41.409 --rc geninfo_unexecuted_blocks=1 00:24:41.409 00:24:41.409 ' 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:41.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.409 --rc genhtml_branch_coverage=1 00:24:41.409 --rc genhtml_function_coverage=1 00:24:41.409 --rc genhtml_legend=1 00:24:41.409 --rc geninfo_all_blocks=1 00:24:41.409 --rc geninfo_unexecuted_blocks=1 00:24:41.409 00:24:41.409 ' 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:41.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.409 --rc genhtml_branch_coverage=1 00:24:41.409 --rc genhtml_function_coverage=1 00:24:41.409 --rc genhtml_legend=1 00:24:41.409 --rc geninfo_all_blocks=1 00:24:41.409 --rc geninfo_unexecuted_blocks=1 00:24:41.409 00:24:41.409 ' 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:41.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.409 --rc genhtml_branch_coverage=1 00:24:41.409 --rc genhtml_function_coverage=1 00:24:41.409 --rc genhtml_legend=1 00:24:41.409 --rc geninfo_all_blocks=1 
00:24:41.409 --rc geninfo_unexecuted_blocks=1 00:24:41.409 00:24:41.409 ' 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:41.409 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:41.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:41.410 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.669 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:41.669 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:41.669 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:24:41.669 19:02:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:48.239 19:02:09 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:48.239 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:48.239 19:02:09 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:48.239 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:48.239 19:02:09 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:48.239 Found net devices under 0000:86:00.0: cvl_0_0 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:48.239 Found net devices under 0000:86:00.1: cvl_0_1 
00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:48.239 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:48.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:48.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.512 ms 00:24:48.240 00:24:48.240 --- 10.0.0.2 ping statistics --- 00:24:48.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:48.240 rtt min/avg/max/mdev = 0.512/0.512/0.512/0.000 ms 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:48.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:48.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:24:48.240 00:24:48.240 --- 10.0.0.1 ping statistics --- 00:24:48.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:48.240 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:48.240 
19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:48.240 19:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:50.144 Waiting for block devices as requested 00:24:50.144 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:50.403 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:50.403 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:50.660 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:50.660 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:50.660 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:50.660 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:50.919 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:50.919 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:50.919 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:51.178 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:51.178 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:51.178 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:51.437 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:51.437 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:24:51.437 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:51.437 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:51.697 19:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:51.697 19:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:51.697 19:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:51.697 19:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:24:51.697 19:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:51.697 19:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:51.697 19:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:51.697 19:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:51.697 19:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:51.697 No valid GPT data, bailing 00:24:51.697 19:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:51.697 19:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:24:51.697 19:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:24:51.697 19:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:51.697 19:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:24:51.697 19:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:51.697 19:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:51.697 19:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:51.697 19:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:51.697 19:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:24:51.697 19:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:24:51.697 19:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:24:51.697 19:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:24:51.697 19:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:24:51.697 19:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:24:51.697 19:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:24:51.697 19:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:51.697 19:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:24:51.697 00:24:51.697 Discovery Log Number of Records 2, Generation counter 2 00:24:51.697 =====Discovery Log Entry 0====== 00:24:51.697 trtype: tcp 00:24:51.697 adrfam: ipv4 00:24:51.697 subtype: current discovery subsystem 
00:24:51.697 treq: not specified, sq flow control disable supported 00:24:51.697 portid: 1 00:24:51.697 trsvcid: 4420 00:24:51.697 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:51.697 traddr: 10.0.0.1 00:24:51.697 eflags: none 00:24:51.697 sectype: none 00:24:51.697 =====Discovery Log Entry 1====== 00:24:51.697 trtype: tcp 00:24:51.697 adrfam: ipv4 00:24:51.697 subtype: nvme subsystem 00:24:51.697 treq: not specified, sq flow control disable supported 00:24:51.697 portid: 1 00:24:51.697 trsvcid: 4420 00:24:51.697 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:51.697 traddr: 10.0.0.1 00:24:51.697 eflags: none 00:24:51.697 sectype: none 00:24:51.697 19:02:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:51.697 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:51.958 ===================================================== 00:24:51.958 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:51.958 ===================================================== 00:24:51.958 Controller Capabilities/Features 00:24:51.958 ================================ 00:24:51.958 Vendor ID: 0000 00:24:51.958 Subsystem Vendor ID: 0000 00:24:51.958 Serial Number: b4800167814a4879602b 00:24:51.958 Model Number: Linux 00:24:51.958 Firmware Version: 6.8.9-20 00:24:51.958 Recommended Arb Burst: 0 00:24:51.958 IEEE OUI Identifier: 00 00 00 00:24:51.958 Multi-path I/O 00:24:51.958 May have multiple subsystem ports: No 00:24:51.958 May have multiple controllers: No 00:24:51.958 Associated with SR-IOV VF: No 00:24:51.958 Max Data Transfer Size: Unlimited 00:24:51.959 Max Number of Namespaces: 0 00:24:51.959 Max Number of I/O Queues: 1024 00:24:51.959 NVMe Specification Version (VS): 1.3 00:24:51.959 NVMe Specification Version (Identify): 1.3 00:24:51.959 Maximum Queue Entries: 1024 
00:24:51.959 Contiguous Queues Required: No 00:24:51.959 Arbitration Mechanisms Supported 00:24:51.959 Weighted Round Robin: Not Supported 00:24:51.959 Vendor Specific: Not Supported 00:24:51.959 Reset Timeout: 7500 ms 00:24:51.959 Doorbell Stride: 4 bytes 00:24:51.959 NVM Subsystem Reset: Not Supported 00:24:51.959 Command Sets Supported 00:24:51.959 NVM Command Set: Supported 00:24:51.959 Boot Partition: Not Supported 00:24:51.959 Memory Page Size Minimum: 4096 bytes 00:24:51.959 Memory Page Size Maximum: 4096 bytes 00:24:51.959 Persistent Memory Region: Not Supported 00:24:51.959 Optional Asynchronous Events Supported 00:24:51.959 Namespace Attribute Notices: Not Supported 00:24:51.959 Firmware Activation Notices: Not Supported 00:24:51.959 ANA Change Notices: Not Supported 00:24:51.959 PLE Aggregate Log Change Notices: Not Supported 00:24:51.959 LBA Status Info Alert Notices: Not Supported 00:24:51.959 EGE Aggregate Log Change Notices: Not Supported 00:24:51.959 Normal NVM Subsystem Shutdown event: Not Supported 00:24:51.959 Zone Descriptor Change Notices: Not Supported 00:24:51.959 Discovery Log Change Notices: Supported 00:24:51.959 Controller Attributes 00:24:51.959 128-bit Host Identifier: Not Supported 00:24:51.959 Non-Operational Permissive Mode: Not Supported 00:24:51.959 NVM Sets: Not Supported 00:24:51.959 Read Recovery Levels: Not Supported 00:24:51.959 Endurance Groups: Not Supported 00:24:51.959 Predictable Latency Mode: Not Supported 00:24:51.959 Traffic Based Keep ALive: Not Supported 00:24:51.959 Namespace Granularity: Not Supported 00:24:51.959 SQ Associations: Not Supported 00:24:51.959 UUID List: Not Supported 00:24:51.959 Multi-Domain Subsystem: Not Supported 00:24:51.959 Fixed Capacity Management: Not Supported 00:24:51.959 Variable Capacity Management: Not Supported 00:24:51.959 Delete Endurance Group: Not Supported 00:24:51.959 Delete NVM Set: Not Supported 00:24:51.959 Extended LBA Formats Supported: Not Supported 00:24:51.959 Flexible 
Data Placement Supported: Not Supported 00:24:51.959 00:24:51.959 Controller Memory Buffer Support 00:24:51.959 ================================ 00:24:51.959 Supported: No 00:24:51.959 00:24:51.959 Persistent Memory Region Support 00:24:51.959 ================================ 00:24:51.959 Supported: No 00:24:51.959 00:24:51.959 Admin Command Set Attributes 00:24:51.959 ============================ 00:24:51.959 Security Send/Receive: Not Supported 00:24:51.959 Format NVM: Not Supported 00:24:51.959 Firmware Activate/Download: Not Supported 00:24:51.959 Namespace Management: Not Supported 00:24:51.959 Device Self-Test: Not Supported 00:24:51.959 Directives: Not Supported 00:24:51.959 NVMe-MI: Not Supported 00:24:51.959 Virtualization Management: Not Supported 00:24:51.959 Doorbell Buffer Config: Not Supported 00:24:51.959 Get LBA Status Capability: Not Supported 00:24:51.959 Command & Feature Lockdown Capability: Not Supported 00:24:51.959 Abort Command Limit: 1 00:24:51.959 Async Event Request Limit: 1 00:24:51.959 Number of Firmware Slots: N/A 00:24:51.959 Firmware Slot 1 Read-Only: N/A 00:24:51.959 Firmware Activation Without Reset: N/A 00:24:51.959 Multiple Update Detection Support: N/A 00:24:51.959 Firmware Update Granularity: No Information Provided 00:24:51.959 Per-Namespace SMART Log: No 00:24:51.959 Asymmetric Namespace Access Log Page: Not Supported 00:24:51.959 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:51.959 Command Effects Log Page: Not Supported 00:24:51.959 Get Log Page Extended Data: Supported 00:24:51.959 Telemetry Log Pages: Not Supported 00:24:51.959 Persistent Event Log Pages: Not Supported 00:24:51.959 Supported Log Pages Log Page: May Support 00:24:51.959 Commands Supported & Effects Log Page: Not Supported 00:24:51.959 Feature Identifiers & Effects Log Page:May Support 00:24:51.959 NVMe-MI Commands & Effects Log Page: May Support 00:24:51.959 Data Area 4 for Telemetry Log: Not Supported 00:24:51.959 Error Log Page Entries 
Supported: 1 00:24:51.959 Keep Alive: Not Supported 00:24:51.959 00:24:51.959 NVM Command Set Attributes 00:24:51.959 ========================== 00:24:51.959 Submission Queue Entry Size 00:24:51.959 Max: 1 00:24:51.959 Min: 1 00:24:51.959 Completion Queue Entry Size 00:24:51.959 Max: 1 00:24:51.959 Min: 1 00:24:51.959 Number of Namespaces: 0 00:24:51.959 Compare Command: Not Supported 00:24:51.959 Write Uncorrectable Command: Not Supported 00:24:51.959 Dataset Management Command: Not Supported 00:24:51.959 Write Zeroes Command: Not Supported 00:24:51.959 Set Features Save Field: Not Supported 00:24:51.959 Reservations: Not Supported 00:24:51.959 Timestamp: Not Supported 00:24:51.959 Copy: Not Supported 00:24:51.959 Volatile Write Cache: Not Present 00:24:51.959 Atomic Write Unit (Normal): 1 00:24:51.959 Atomic Write Unit (PFail): 1 00:24:51.959 Atomic Compare & Write Unit: 1 00:24:51.959 Fused Compare & Write: Not Supported 00:24:51.959 Scatter-Gather List 00:24:51.959 SGL Command Set: Supported 00:24:51.959 SGL Keyed: Not Supported 00:24:51.959 SGL Bit Bucket Descriptor: Not Supported 00:24:51.959 SGL Metadata Pointer: Not Supported 00:24:51.959 Oversized SGL: Not Supported 00:24:51.959 SGL Metadata Address: Not Supported 00:24:51.959 SGL Offset: Supported 00:24:51.959 Transport SGL Data Block: Not Supported 00:24:51.959 Replay Protected Memory Block: Not Supported 00:24:51.959 00:24:51.959 Firmware Slot Information 00:24:51.959 ========================= 00:24:51.959 Active slot: 0 00:24:51.959 00:24:51.959 00:24:51.959 Error Log 00:24:51.959 ========= 00:24:51.959 00:24:51.959 Active Namespaces 00:24:51.959 ================= 00:24:51.959 Discovery Log Page 00:24:51.959 ================== 00:24:51.959 Generation Counter: 2 00:24:51.959 Number of Records: 2 00:24:51.959 Record Format: 0 00:24:51.959 00:24:51.959 Discovery Log Entry 0 00:24:51.959 ---------------------- 00:24:51.959 Transport Type: 3 (TCP) 00:24:51.959 Address Family: 1 (IPv4) 00:24:51.959 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:24:51.959 Entry Flags: 00:24:51.959 Duplicate Returned Information: 0 00:24:51.959 Explicit Persistent Connection Support for Discovery: 0 00:24:51.959 Transport Requirements: 00:24:51.959 Secure Channel: Not Specified 00:24:51.959 Port ID: 1 (0x0001) 00:24:51.959 Controller ID: 65535 (0xffff) 00:24:51.959 Admin Max SQ Size: 32 00:24:51.959 Transport Service Identifier: 4420 00:24:51.959 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:51.959 Transport Address: 10.0.0.1 00:24:51.959 Discovery Log Entry 1 00:24:51.959 ---------------------- 00:24:51.959 Transport Type: 3 (TCP) 00:24:51.959 Address Family: 1 (IPv4) 00:24:51.959 Subsystem Type: 2 (NVM Subsystem) 00:24:51.959 Entry Flags: 00:24:51.959 Duplicate Returned Information: 0 00:24:51.959 Explicit Persistent Connection Support for Discovery: 0 00:24:51.959 Transport Requirements: 00:24:51.959 Secure Channel: Not Specified 00:24:51.959 Port ID: 1 (0x0001) 00:24:51.959 Controller ID: 65535 (0xffff) 00:24:51.959 Admin Max SQ Size: 32 00:24:51.959 Transport Service Identifier: 4420 00:24:51.959 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:51.959 Transport Address: 10.0.0.1 00:24:51.959 19:02:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:51.959 get_feature(0x01) failed 00:24:51.959 get_feature(0x02) failed 00:24:51.959 get_feature(0x04) failed 00:24:51.959 ===================================================== 00:24:51.959 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:51.959 ===================================================== 00:24:51.959 Controller Capabilities/Features 00:24:51.959 ================================ 00:24:51.959 Vendor ID: 0000 00:24:51.959 Subsystem Vendor ID: 
0000 00:24:51.959 Serial Number: bd926d32f071e77ab667 00:24:51.959 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:51.959 Firmware Version: 6.8.9-20 00:24:51.959 Recommended Arb Burst: 6 00:24:51.959 IEEE OUI Identifier: 00 00 00 00:24:51.959 Multi-path I/O 00:24:51.959 May have multiple subsystem ports: Yes 00:24:51.959 May have multiple controllers: Yes 00:24:51.959 Associated with SR-IOV VF: No 00:24:51.960 Max Data Transfer Size: Unlimited 00:24:51.960 Max Number of Namespaces: 1024 00:24:51.960 Max Number of I/O Queues: 128 00:24:51.960 NVMe Specification Version (VS): 1.3 00:24:51.960 NVMe Specification Version (Identify): 1.3 00:24:51.960 Maximum Queue Entries: 1024 00:24:51.960 Contiguous Queues Required: No 00:24:51.960 Arbitration Mechanisms Supported 00:24:51.960 Weighted Round Robin: Not Supported 00:24:51.960 Vendor Specific: Not Supported 00:24:51.960 Reset Timeout: 7500 ms 00:24:51.960 Doorbell Stride: 4 bytes 00:24:51.960 NVM Subsystem Reset: Not Supported 00:24:51.960 Command Sets Supported 00:24:51.960 NVM Command Set: Supported 00:24:51.960 Boot Partition: Not Supported 00:24:51.960 Memory Page Size Minimum: 4096 bytes 00:24:51.960 Memory Page Size Maximum: 4096 bytes 00:24:51.960 Persistent Memory Region: Not Supported 00:24:51.960 Optional Asynchronous Events Supported 00:24:51.960 Namespace Attribute Notices: Supported 00:24:51.960 Firmware Activation Notices: Not Supported 00:24:51.960 ANA Change Notices: Supported 00:24:51.960 PLE Aggregate Log Change Notices: Not Supported 00:24:51.960 LBA Status Info Alert Notices: Not Supported 00:24:51.960 EGE Aggregate Log Change Notices: Not Supported 00:24:51.960 Normal NVM Subsystem Shutdown event: Not Supported 00:24:51.960 Zone Descriptor Change Notices: Not Supported 00:24:51.960 Discovery Log Change Notices: Not Supported 00:24:51.960 Controller Attributes 00:24:51.960 128-bit Host Identifier: Supported 00:24:51.960 Non-Operational Permissive Mode: Not Supported 00:24:51.960 NVM Sets: Not 
Supported 00:24:51.960 Read Recovery Levels: Not Supported 00:24:51.960 Endurance Groups: Not Supported 00:24:51.960 Predictable Latency Mode: Not Supported 00:24:51.960 Traffic Based Keep ALive: Supported 00:24:51.960 Namespace Granularity: Not Supported 00:24:51.960 SQ Associations: Not Supported 00:24:51.960 UUID List: Not Supported 00:24:51.960 Multi-Domain Subsystem: Not Supported 00:24:51.960 Fixed Capacity Management: Not Supported 00:24:51.960 Variable Capacity Management: Not Supported 00:24:51.960 Delete Endurance Group: Not Supported 00:24:51.960 Delete NVM Set: Not Supported 00:24:51.960 Extended LBA Formats Supported: Not Supported 00:24:51.960 Flexible Data Placement Supported: Not Supported 00:24:51.960 00:24:51.960 Controller Memory Buffer Support 00:24:51.960 ================================ 00:24:51.960 Supported: No 00:24:51.960 00:24:51.960 Persistent Memory Region Support 00:24:51.960 ================================ 00:24:51.960 Supported: No 00:24:51.960 00:24:51.960 Admin Command Set Attributes 00:24:51.960 ============================ 00:24:51.960 Security Send/Receive: Not Supported 00:24:51.960 Format NVM: Not Supported 00:24:51.960 Firmware Activate/Download: Not Supported 00:24:51.960 Namespace Management: Not Supported 00:24:51.960 Device Self-Test: Not Supported 00:24:51.960 Directives: Not Supported 00:24:51.960 NVMe-MI: Not Supported 00:24:51.960 Virtualization Management: Not Supported 00:24:51.960 Doorbell Buffer Config: Not Supported 00:24:51.960 Get LBA Status Capability: Not Supported 00:24:51.960 Command & Feature Lockdown Capability: Not Supported 00:24:51.960 Abort Command Limit: 4 00:24:51.960 Async Event Request Limit: 4 00:24:51.960 Number of Firmware Slots: N/A 00:24:51.960 Firmware Slot 1 Read-Only: N/A 00:24:51.960 Firmware Activation Without Reset: N/A 00:24:51.960 Multiple Update Detection Support: N/A 00:24:51.960 Firmware Update Granularity: No Information Provided 00:24:51.960 Per-Namespace SMART Log: Yes 
00:24:51.960 Asymmetric Namespace Access Log Page: Supported 00:24:51.960 ANA Transition Time : 10 sec 00:24:51.960 00:24:51.960 Asymmetric Namespace Access Capabilities 00:24:51.960 ANA Optimized State : Supported 00:24:51.960 ANA Non-Optimized State : Supported 00:24:51.960 ANA Inaccessible State : Supported 00:24:51.960 ANA Persistent Loss State : Supported 00:24:51.960 ANA Change State : Supported 00:24:51.960 ANAGRPID is not changed : No 00:24:51.960 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:51.960 00:24:51.960 ANA Group Identifier Maximum : 128 00:24:51.960 Number of ANA Group Identifiers : 128 00:24:51.960 Max Number of Allowed Namespaces : 1024 00:24:51.960 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:51.960 Command Effects Log Page: Supported 00:24:51.960 Get Log Page Extended Data: Supported 00:24:51.960 Telemetry Log Pages: Not Supported 00:24:51.960 Persistent Event Log Pages: Not Supported 00:24:51.960 Supported Log Pages Log Page: May Support 00:24:51.960 Commands Supported & Effects Log Page: Not Supported 00:24:51.960 Feature Identifiers & Effects Log Page:May Support 00:24:51.960 NVMe-MI Commands & Effects Log Page: May Support 00:24:51.960 Data Area 4 for Telemetry Log: Not Supported 00:24:51.960 Error Log Page Entries Supported: 128 00:24:51.960 Keep Alive: Supported 00:24:51.960 Keep Alive Granularity: 1000 ms 00:24:51.960 00:24:51.960 NVM Command Set Attributes 00:24:51.960 ========================== 00:24:51.960 Submission Queue Entry Size 00:24:51.960 Max: 64 00:24:51.960 Min: 64 00:24:51.960 Completion Queue Entry Size 00:24:51.960 Max: 16 00:24:51.960 Min: 16 00:24:51.960 Number of Namespaces: 1024 00:24:51.960 Compare Command: Not Supported 00:24:51.960 Write Uncorrectable Command: Not Supported 00:24:51.960 Dataset Management Command: Supported 00:24:51.960 Write Zeroes Command: Supported 00:24:51.960 Set Features Save Field: Not Supported 00:24:51.960 Reservations: Not Supported 00:24:51.960 Timestamp: Not Supported 
00:24:51.960 Copy: Not Supported 00:24:51.960 Volatile Write Cache: Present 00:24:51.960 Atomic Write Unit (Normal): 1 00:24:51.960 Atomic Write Unit (PFail): 1 00:24:51.960 Atomic Compare & Write Unit: 1 00:24:51.960 Fused Compare & Write: Not Supported 00:24:51.960 Scatter-Gather List 00:24:51.960 SGL Command Set: Supported 00:24:51.960 SGL Keyed: Not Supported 00:24:51.960 SGL Bit Bucket Descriptor: Not Supported 00:24:51.960 SGL Metadata Pointer: Not Supported 00:24:51.960 Oversized SGL: Not Supported 00:24:51.960 SGL Metadata Address: Not Supported 00:24:51.960 SGL Offset: Supported 00:24:51.960 Transport SGL Data Block: Not Supported 00:24:51.960 Replay Protected Memory Block: Not Supported 00:24:51.960 00:24:51.960 Firmware Slot Information 00:24:51.960 ========================= 00:24:51.960 Active slot: 0 00:24:51.960 00:24:51.960 Asymmetric Namespace Access 00:24:51.960 =========================== 00:24:51.960 Change Count : 0 00:24:51.960 Number of ANA Group Descriptors : 1 00:24:51.960 ANA Group Descriptor : 0 00:24:51.960 ANA Group ID : 1 00:24:51.960 Number of NSID Values : 1 00:24:51.960 Change Count : 0 00:24:51.960 ANA State : 1 00:24:51.960 Namespace Identifier : 1 00:24:51.960 00:24:51.960 Commands Supported and Effects 00:24:51.960 ============================== 00:24:51.960 Admin Commands 00:24:51.960 -------------- 00:24:51.960 Get Log Page (02h): Supported 00:24:51.960 Identify (06h): Supported 00:24:51.960 Abort (08h): Supported 00:24:51.960 Set Features (09h): Supported 00:24:51.960 Get Features (0Ah): Supported 00:24:51.960 Asynchronous Event Request (0Ch): Supported 00:24:51.960 Keep Alive (18h): Supported 00:24:51.960 I/O Commands 00:24:51.960 ------------ 00:24:51.960 Flush (00h): Supported 00:24:51.960 Write (01h): Supported LBA-Change 00:24:51.960 Read (02h): Supported 00:24:51.960 Write Zeroes (08h): Supported LBA-Change 00:24:51.960 Dataset Management (09h): Supported 00:24:51.960 00:24:51.960 Error Log 00:24:51.960 ========= 
00:24:51.960 Entry: 0 00:24:51.960 Error Count: 0x3 00:24:51.960 Submission Queue Id: 0x0 00:24:51.960 Command Id: 0x5 00:24:51.960 Phase Bit: 0 00:24:51.960 Status Code: 0x2 00:24:51.960 Status Code Type: 0x0 00:24:51.960 Do Not Retry: 1 00:24:51.960 Error Location: 0x28 00:24:51.960 LBA: 0x0 00:24:51.960 Namespace: 0x0 00:24:51.960 Vendor Log Page: 0x0 00:24:51.960 ----------- 00:24:51.960 Entry: 1 00:24:51.960 Error Count: 0x2 00:24:51.960 Submission Queue Id: 0x0 00:24:51.960 Command Id: 0x5 00:24:51.960 Phase Bit: 0 00:24:51.960 Status Code: 0x2 00:24:51.960 Status Code Type: 0x0 00:24:51.960 Do Not Retry: 1 00:24:51.960 Error Location: 0x28 00:24:51.960 LBA: 0x0 00:24:51.960 Namespace: 0x0 00:24:51.960 Vendor Log Page: 0x0 00:24:51.960 ----------- 00:24:51.960 Entry: 2 00:24:51.960 Error Count: 0x1 00:24:51.960 Submission Queue Id: 0x0 00:24:51.960 Command Id: 0x4 00:24:51.960 Phase Bit: 0 00:24:51.961 Status Code: 0x2 00:24:51.961 Status Code Type: 0x0 00:24:51.961 Do Not Retry: 1 00:24:51.961 Error Location: 0x28 00:24:51.961 LBA: 0x0 00:24:51.961 Namespace: 0x0 00:24:51.961 Vendor Log Page: 0x0 00:24:51.961 00:24:51.961 Number of Queues 00:24:51.961 ================ 00:24:51.961 Number of I/O Submission Queues: 128 00:24:51.961 Number of I/O Completion Queues: 128 00:24:51.961 00:24:51.961 ZNS Specific Controller Data 00:24:51.961 ============================ 00:24:51.961 Zone Append Size Limit: 0 00:24:51.961 00:24:51.961 00:24:51.961 Active Namespaces 00:24:51.961 ================= 00:24:51.961 get_feature(0x05) failed 00:24:51.961 Namespace ID:1 00:24:51.961 Command Set Identifier: NVM (00h) 00:24:51.961 Deallocate: Supported 00:24:51.961 Deallocated/Unwritten Error: Not Supported 00:24:51.961 Deallocated Read Value: Unknown 00:24:51.961 Deallocate in Write Zeroes: Not Supported 00:24:51.961 Deallocated Guard Field: 0xFFFF 00:24:51.961 Flush: Supported 00:24:51.961 Reservation: Not Supported 00:24:51.961 Namespace Sharing Capabilities: Multiple 
Controllers 00:24:51.961 Size (in LBAs): 3125627568 (1490GiB) 00:24:51.961 Capacity (in LBAs): 3125627568 (1490GiB) 00:24:51.961 Utilization (in LBAs): 3125627568 (1490GiB) 00:24:51.961 UUID: cf9e0aa8-6c65-4e7f-8a60-7280f5239604 00:24:51.961 Thin Provisioning: Not Supported 00:24:51.961 Per-NS Atomic Units: Yes 00:24:51.961 Atomic Boundary Size (Normal): 0 00:24:51.961 Atomic Boundary Size (PFail): 0 00:24:51.961 Atomic Boundary Offset: 0 00:24:51.961 NGUID/EUI64 Never Reused: No 00:24:51.961 ANA group ID: 1 00:24:51.961 Namespace Write Protected: No 00:24:51.961 Number of LBA Formats: 1 00:24:51.961 Current LBA Format: LBA Format #00 00:24:51.961 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:51.961 00:24:51.961 19:02:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:51.961 19:02:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:51.961 19:02:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:24:51.961 19:02:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:51.961 19:02:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:24:51.961 19:02:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:51.961 19:02:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:51.961 rmmod nvme_tcp 00:24:51.961 rmmod nvme_fabrics 00:24:51.961 19:02:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:51.961 19:02:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:24:51.961 19:02:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:24:51.961 19:02:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:24:51.961 19:02:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:51.961 19:02:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:51.961 19:02:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:51.961 19:02:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:24:51.961 19:02:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:24:51.961 19:02:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:51.961 19:02:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:24:51.961 19:02:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:51.961 19:02:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:51.961 19:02:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.961 19:02:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:51.961 19:02:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.494 19:02:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:54.494 19:02:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:54.494 19:02:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:54.494 19:02:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:24:54.494 19:02:16 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:54.494 19:02:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:54.494 19:02:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:54.494 19:02:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:54.494 19:02:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:54.494 19:02:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:54.494 19:02:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:57.033 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:57.033 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:57.033 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:57.033 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:57.033 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:57.033 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:57.033 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:57.033 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:57.033 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:57.033 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:57.033 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:57.033 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:57.033 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:57.033 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:57.033 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:57.033 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:24:58.416 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:24:58.416 00:24:58.416 real 0m17.224s 00:24:58.416 user 0m4.316s 00:24:58.416 sys 0m8.772s 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:58.675 ************************************ 00:24:58.675 END TEST nvmf_identify_kernel_target 00:24:58.675 ************************************ 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.675 ************************************ 00:24:58.675 START TEST nvmf_auth_host 00:24:58.675 ************************************ 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:58.675 * Looking for test storage... 
00:24:58.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:58.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.675 --rc genhtml_branch_coverage=1 00:24:58.675 --rc genhtml_function_coverage=1 00:24:58.675 --rc genhtml_legend=1 00:24:58.675 --rc geninfo_all_blocks=1 00:24:58.675 --rc geninfo_unexecuted_blocks=1 00:24:58.675 00:24:58.675 ' 00:24:58.675 19:02:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:58.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.675 --rc genhtml_branch_coverage=1 00:24:58.675 --rc genhtml_function_coverage=1 00:24:58.675 --rc genhtml_legend=1 00:24:58.675 --rc geninfo_all_blocks=1 00:24:58.675 --rc geninfo_unexecuted_blocks=1 00:24:58.675 00:24:58.675 ' 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:58.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.675 --rc genhtml_branch_coverage=1 00:24:58.675 --rc genhtml_function_coverage=1 00:24:58.675 --rc genhtml_legend=1 00:24:58.675 --rc geninfo_all_blocks=1 00:24:58.675 --rc geninfo_unexecuted_blocks=1 00:24:58.675 00:24:58.675 ' 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:58.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.675 --rc genhtml_branch_coverage=1 00:24:58.675 --rc genhtml_function_coverage=1 00:24:58.675 --rc genhtml_legend=1 00:24:58.675 --rc geninfo_all_blocks=1 00:24:58.675 --rc geninfo_unexecuted_blocks=1 00:24:58.675 00:24:58.675 ' 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:58.675 19:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:58.934 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:58.934 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:58.934 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:58.934 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:58.934 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:58.934 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:58.934 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:58.934 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:58.934 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:58.934 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:58.934 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:58.934 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.934 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.934 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.934 19:02:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:58.934 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.934 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:24:58.934 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:58.934 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:58.934 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:58.934 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:58.934 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:58.934 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:58.934 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:58.934 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:58.934 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:58.934 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:58.934 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:24:58.935 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:58.935 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:58.935 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:58.935 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:58.935 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:58.935 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:58.935 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:24:58.935 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:58.935 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:58.935 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:58.935 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:58.935 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:58.935 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:58.935 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.935 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:58.935 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.935 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:58.935 19:02:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:58.935 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:58.935 19:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:05.506 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:05.506 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:05.506 Found net devices under 0000:86:00.0: cvl_0_0 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:05.506 Found net devices under 0000:86:00.1: cvl_0_1 00:25:05.506 19:02:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:05.506 19:02:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:05.506 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:05.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:05.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.405 ms 00:25:05.507 00:25:05.507 --- 10.0.0.2 ping statistics --- 00:25:05.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.507 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:25:05.507 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:05.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:05.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:25:05.507 00:25:05.507 --- 10.0.0.1 ping statistics --- 00:25:05.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.507 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:25:05.507 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:05.507 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:25:05.507 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:05.507 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:05.507 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:05.507 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:05.507 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:05.507 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:05.507 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:05.507 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:05.507 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:05.507 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:05.507 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.507 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3774981 00:25:05.507 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:05.507 19:02:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3774981 00:25:05.507 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3774981 ']' 00:25:05.507 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.507 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:05.507 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.507 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:05.507 19:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=db97f978ac54e328bc1d06bdea6ead2d 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.rZZ 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key db97f978ac54e328bc1d06bdea6ead2d 0 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 db97f978ac54e328bc1d06bdea6ead2d 0 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=db97f978ac54e328bc1d06bdea6ead2d 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.rZZ 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.rZZ 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.rZZ 00:25:05.507 19:02:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7431263d43250db445767cb6efb0bb9924e44c39c21fb1347b7c858908852be8 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.cGp 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7431263d43250db445767cb6efb0bb9924e44c39c21fb1347b7c858908852be8 3 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7431263d43250db445767cb6efb0bb9924e44c39c21fb1347b7c858908852be8 3 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7431263d43250db445767cb6efb0bb9924e44c39c21fb1347b7c858908852be8 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.cGp 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.cGp 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.cGp 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5746d08ed2482d3798c4502833c2706e31c8fb590c75b0f1 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Rpj 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5746d08ed2482d3798c4502833c2706e31c8fb590c75b0f1 0 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5746d08ed2482d3798c4502833c2706e31c8fb590c75b0f1 0 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:05.507 19:02:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5746d08ed2482d3798c4502833c2706e31c8fb590c75b0f1 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Rpj 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Rpj 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Rpj 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7f5a256481a9923f15a97657e9443adc7df9eba0b520d19e 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.0J5 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7f5a256481a9923f15a97657e9443adc7df9eba0b520d19e 2 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 7f5a256481a9923f15a97657e9443adc7df9eba0b520d19e 2 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:05.507 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7f5a256481a9923f15a97657e9443adc7df9eba0b520d19e 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.0J5 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.0J5 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.0J5 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=45558355eba595b420edba622f617863 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Qd6 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 45558355eba595b420edba622f617863 1 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 45558355eba595b420edba622f617863 1 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=45558355eba595b420edba622f617863 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Qd6 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Qd6 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Qd6 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=6504d5bcf9dc29e34f5f4053cf2ed002 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.JoI 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6504d5bcf9dc29e34f5f4053cf2ed002 1 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6504d5bcf9dc29e34f5f4053cf2ed002 1 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6504d5bcf9dc29e34f5f4053cf2ed002 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.JoI 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.JoI 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.JoI 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:05.508 19:02:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=68aefeeb83d40cb2f0627a01774a396e47a73ed9b8b2c905 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.yRt 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 68aefeeb83d40cb2f0627a01774a396e47a73ed9b8b2c905 2 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 68aefeeb83d40cb2f0627a01774a396e47a73ed9b8b2c905 2 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=68aefeeb83d40cb2f0627a01774a396e47a73ed9b8b2c905 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.yRt 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.yRt 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.yRt 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f938da73962e18151fcce6c0f2f40511 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.oke 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f938da73962e18151fcce6c0f2f40511 0 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f938da73962e18151fcce6c0f2f40511 0 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f938da73962e18151fcce6c0f2f40511 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.oke 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.oke 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.oke 00:25:05.508 19:02:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9fa9fa186e89cac2608eb23e0cbd066432f8b1ffc6f7a243c68f07a8b7f76363 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.rTZ 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9fa9fa186e89cac2608eb23e0cbd066432f8b1ffc6f7a243c68f07a8b7f76363 3 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9fa9fa186e89cac2608eb23e0cbd066432f8b1ffc6f7a243c68f07a8b7f76363 3 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9fa9fa186e89cac2608eb23e0cbd066432f8b1ffc6f7a243c68f07a8b7f76363 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.rTZ 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.rTZ 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.rTZ 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3774981 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3774981 ']' 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:05.508 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
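The repeated `gen_dhchap_key` calls traced above all follow the same shape; a minimal sketch reconstructed from the trace (names follow `nvmf/common.sh` as logged; the `format_dhchap_key` stand-in here only illustrates the call — the real helper base64-encodes the key plus a CRC-32 via the embedded python step):

```shell
#!/usr/bin/env bash
# Stand-in for the real formatter (which emits DHHC-1:<id>:<base64 key+crc>:).
format_dhchap_key() { printf 'DHHC-1:0%s:%s:' "$2" "$1"; }

# Sketch of gen_dhchap_key <digest> <len> as seen in the trace:
# draw len/2 random bytes as len hex chars, write the formatted secret to a
# mode-0600 temp file, and print the file path for the caller to capture.
gen_dhchap_key() {
    local digest=$1 len=$2 key file
    declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)      # len hex characters
    file=$(mktemp -t "spdk.key-$digest.XXX")
    format_dhchap_key "$key" "${digests[$digest]}" > "$file"
    chmod 0600 "$file"                                   # keys are secrets
    echo "$file"
}
```

The caller then assigns the echoed path into `keys[i]` or `ckeys[i]`, which is why each block above ends with `echo /tmp/spdk.key-...` followed by an array assignment.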
00:25:05.509 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:05.509 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.768 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:05.768 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:05.768 19:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:05.768 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.rZZ 00:25:05.768 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.768 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.768 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.768 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.cGp ]] 00:25:05.768 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.cGp 00:25:05.768 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.768 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.768 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.768 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:05.768 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Rpj 00:25:05.768 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.768 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:05.768 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.768 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.0J5 ]] 00:25:05.768 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0J5 00:25:05.768 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.768 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.768 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Qd6 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.JoI ]] 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JoI 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.yRt 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.oke ]] 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.oke 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.rTZ 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:05.769 19:02:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:05.769 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:06.028 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:06.028 19:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:08.567 Waiting for block devices as requested 00:25:08.567 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:08.826 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:08.826 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:08.826 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:08.826 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:09.085 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:09.085 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:09.085 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:09.085 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:09.344 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:09.344 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:09.344 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:09.601 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:09.601 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:09.601 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:09.601 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:09.859 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:10.427 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:10.427 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:10.427 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:10.427 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:10.427 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:25:10.427 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:10.427 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:10.427 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:10.427 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:10.427 No valid GPT data, bailing 00:25:10.427 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:10.427 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:10.427 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:25:10.427 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:10.427 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:10.427 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:10.427 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:10.427 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:10.427 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:10.427 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:25:10.427 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:10.427 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:25:10.427 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:25:10.427 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:25:10.427 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:25:10.427 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:25:10.427 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:10.427 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:10.427 00:25:10.427 Discovery Log Number of Records 2, Generation counter 2 00:25:10.427 =====Discovery Log Entry 0====== 00:25:10.427 trtype: tcp 00:25:10.427 adrfam: ipv4 00:25:10.427 subtype: current discovery subsystem 00:25:10.427 treq: not specified, sq flow control disable supported 00:25:10.427 portid: 1 00:25:10.427 trsvcid: 4420 00:25:10.427 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:10.427 traddr: 10.0.0.1 00:25:10.427 eflags: none 00:25:10.427 sectype: none 00:25:10.427 =====Discovery Log Entry 1====== 00:25:10.427 trtype: tcp 00:25:10.427 adrfam: ipv4 00:25:10.427 subtype: nvme subsystem 00:25:10.427 treq: not specified, sq flow control disable supported 00:25:10.427 portid: 1 00:25:10.427 trsvcid: 4420 00:25:10.427 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:10.427 traddr: 10.0.0.1 00:25:10.427 eflags: none 00:25:10.427 sectype: none 00:25:10.427 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:10.427 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:10.427 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTc0NmQwOGVkMjQ4MmQzNzk4YzQ1MDI4MzNjMjcwNmUzMWM4ZmI1OTBjNzViMGYxxNdyOg==: 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTc0NmQwOGVkMjQ4MmQzNzk4YzQ1MDI4MzNjMjcwNmUzMWM4ZmI1OTBjNzViMGYxxNdyOg==: 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: ]] 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.687 nvme0n1 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI5N2Y5NzhhYzU0ZTMyOGJjMWQwNmJkZWE2ZWFkMmSoSm/T: 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI5N2Y5NzhhYzU0ZTMyOGJjMWQwNmJkZWE2ZWFkMmSoSm/T: 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: ]] 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.687 19:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.687 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.687 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.687 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:10.687 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:10.687 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:10.687 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.687 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.687 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:25:10.687 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.687 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:10.687 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:10.687 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:10.687 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:10.687 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.687 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.946 nvme0n1 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.947 19:02:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTc0NmQwOGVkMjQ4MmQzNzk4YzQ1MDI4MzNjMjcwNmUzMWM4ZmI1OTBjNzViMGYxxNdyOg==: 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTc0NmQwOGVkMjQ4MmQzNzk4YzQ1MDI4MzNjMjcwNmUzMWM4ZmI1OTBjNzViMGYxxNdyOg==: 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: ]] 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.947 
19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.947 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.206 nvme0n1 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU1NTgzNTVlYmE1OTViNDIwZWRiYTYyMmY2MTc4NjP7Te+n: 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU1NTgzNTVlYmE1OTViNDIwZWRiYTYyMmY2MTc4NjP7Te+n: 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: ]] 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.206 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:25:11.465 nvme0n1 00:25:11.465 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.465 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.465 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.465 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.465 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.465 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.465 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.465 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.465 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NjhhZWZlZWI4M2Q0MGNiMmYwNjI3YTAxNzc0YTM5NmU0N2E3M2VkOWI4YjJjOTA180ALVg==: 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjhhZWZlZWI4M2Q0MGNiMmYwNjI3YTAxNzc0YTM5NmU0N2E3M2VkOWI4YjJjOTA180ALVg==: 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: ]] 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.466 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.725 nvme0n1 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWZhOWZhMTg2ZTg5Y2FjMjYwOGViMjNlMGNiZDA2NjQzMmY4YjFmZmM2ZjdhMjQzYzY4ZjA3YThiN2Y3NjM2MyfNs4c=: 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:11.725 19:02:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWZhOWZhMTg2ZTg5Y2FjMjYwOGViMjNlMGNiZDA2NjQzMmY4YjFmZmM2ZjdhMjQzYzY4ZjA3YThiN2Y3NjM2MyfNs4c=: 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:11.725 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:11.726 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:11.726 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.726 19:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.985 nvme0n1 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.985 
19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI5N2Y5NzhhYzU0ZTMyOGJjMWQwNmJkZWE2ZWFkMmSoSm/T: 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI5N2Y5NzhhYzU0ZTMyOGJjMWQwNmJkZWE2ZWFkMmSoSm/T: 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: ]] 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: 00:25:11.985 
19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.985 19:02:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.985 nvme0n1 00:25:11.985 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.244 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.244 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.244 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.244 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.244 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.244 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.244 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.244 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.244 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.244 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.244 19:02:34 
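The pass above (keyid 0, sha256, ffdhe3072) shows the shape of every iteration in this loop: restrict the host to one digest/dhgroup pair, attach with the DH-HMAC-CHAP key and controller key for that keyid, confirm the controller appeared, then detach. A minimal runnable sketch of that sequence, with `rpc_cmd` stubbed out (in the real harness it wraps SPDK's `scripts/rpc.py`) so the flow can be followed without a live target:

```shell
# One connect_authenticate pass as recorded in the log. rpc_cmd is a stub here;
# the command names and flags are taken verbatim from the log above.
calls=()
rpc_cmd() { calls+=("$1"); echo "rpc: $*"; }

digest=sha256 dhgroup=ffdhe3072 keyid=0

# Allow only the digest/dhgroup under test on the host side.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
# Attach with the host key and (bidirectional) controller key for this keyid.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
  --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
# Verify the controller came up, then tear it down for the next iteration.
rpc_cmd bdev_nvme_get_controllers
rpc_cmd bdev_nvme_detach_controller nvme0
```

The harness then repeats this for every keyid before moving to the next dhgroup.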
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.244 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:12.244 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.244 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:12.244 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:12.244 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:12.244 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTc0NmQwOGVkMjQ4MmQzNzk4YzQ1MDI4MzNjMjcwNmUzMWM4ZmI1OTBjNzViMGYxxNdyOg==: 00:25:12.244 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: 00:25:12.244 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:12.244 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:12.244 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTc0NmQwOGVkMjQ4MmQzNzk4YzQ1MDI4MzNjMjcwNmUzMWM4ZmI1OTBjNzViMGYxxNdyOg==: 00:25:12.244 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: ]] 00:25:12.244 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: 00:25:12.244 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:12.244 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.244 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:12.244 19:02:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:12.244 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:12.244 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.244 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:12.244 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.244 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.244 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.244 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.244 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:12.244 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:12.245 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:12.245 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.245 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.245 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:12.245 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.245 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:12.245 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:12.245 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:12.245 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:12.245 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.245 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.245 nvme0n1 00:25:12.245 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.504 19:02:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU1NTgzNTVlYmE1OTViNDIwZWRiYTYyMmY2MTc4NjP7Te+n: 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU1NTgzNTVlYmE1OTViNDIwZWRiYTYyMmY2MTc4NjP7Te+n: 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: ]] 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.504 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.504 nvme0n1 00:25:12.504 19:02:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjhhZWZlZWI4M2Q0MGNiMmYwNjI3YTAxNzc0YTM5NmU0N2E3M2VkOWI4YjJjOTA180ALVg==: 00:25:12.763 19:02:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjhhZWZlZWI4M2Q0MGNiMmYwNjI3YTAxNzc0YTM5NmU0N2E3M2VkOWI4YjJjOTA180ALVg==: 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: ]] 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.763 19:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.763 nvme0n1 00:25:12.763 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
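The `DHHC-1:NN:<base64>:` strings echoed throughout this log are DH-HMAC-CHAP secrets in the NVMe textual representation: a `DHHC-1` prefix, a two-digit field identifying the secret-transform hash (00 meaning an unprotected secret, per our reading of the spec), and a base64 blob carrying the secret followed by a 4-byte CRC-32. A small sketch parsing key 0 from this log under that interpretation:

```shell
# Parse a DH-HMAC-CHAP secret string as printed in the log. The field meanings
# (00 = no secret transform, last 4 decoded bytes = CRC-32 of the secret) are
# our reading of the NVMe key representation, not something the log itself states.
key='DHHC-1:00:ZGI5N2Y5NzhhYzU0ZTMyOGJjMWQwNmJkZWE2ZWFkMmSoSm/T:'
IFS=: read -r magic hmac_id b64 _ <<<"$key"
blob_len=$(printf '%s' "$b64" | base64 -d | wc -c)  # secret + 4-byte CRC
secret_len=$((blob_len - 4))
echo "$magic hmac=$hmac_id secret=${secret_len} bytes"
```

For this key the decoded blob is 36 bytes, i.e. a 32-byte secret plus the CRC; the longer `:03:` controller keys in the log decode to 64-byte secrets the same way.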
00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWZhOWZhMTg2ZTg5Y2FjMjYwOGViMjNlMGNiZDA2NjQzMmY4YjFmZmM2ZjdhMjQzYzY4ZjA3YThiN2Y3NjM2MyfNs4c=: 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OWZhOWZhMTg2ZTg5Y2FjMjYwOGViMjNlMGNiZDA2NjQzMmY4YjFmZmM2ZjdhMjQzYzY4ZjA3YThiN2Y3NjM2MyfNs4c=: 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.023 19:02:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.023 nvme0n1 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.023 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
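Notice that for keyid 4 the log shows `ckey=` expanding to nothing and `bdev_nvme_attach_controller` running without `--dhchap-ctrlr-key`: `auth.sh@58` builds the optional flag pair with bash's `:+` alternate-value expansion inside an array assignment, so a missing controller key contributes zero arguments rather than an empty string. A self-contained sketch of that idiom (the `ckeys` contents here are illustrative, not the harness's real keys):

```shell
# The auth.sh@58 idiom: expand to a flag/value pair only when a controller key
# exists for this keyid; otherwise the array stays empty and adds no arguments.
ckeys=([1]="example-ctrlr-key")  # keyid 4 deliberately has no entry

build_ckey_args() {
  local keyid=$1
  local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${#ckey[@]}"  # number of words the expansion produced
}

with_key=$(build_ckey_args 1)     # flag + value: two words
without_key=$(build_ckey_args 4)  # nothing: zero words
```

Passing the array as `"${ckey[@]}"` then yields either both words or none, which is why the keyid 4 attach in the log carries `--dhchap-key key4` alone.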
common/autotest_common.sh@10 -- # set +x 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI5N2Y5NzhhYzU0ZTMyOGJjMWQwNmJkZWE2ZWFkMmSoSm/T: 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI5N2Y5NzhhYzU0ZTMyOGJjMWQwNmJkZWE2ZWFkMmSoSm/T: 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: ]] 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.282 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.542 nvme0n1 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTc0NmQwOGVkMjQ4MmQzNzk4YzQ1MDI4MzNjMjcwNmUzMWM4ZmI1OTBjNzViMGYxxNdyOg==: 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTc0NmQwOGVkMjQ4MmQzNzk4YzQ1MDI4MzNjMjcwNmUzMWM4ZmI1OTBjNzViMGYxxNdyOg==: 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: ]] 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:13.542 
19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.542 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.801 nvme0n1 00:25:13.801 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.801 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.801 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.801 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.801 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.801 19:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.801 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:13.802 19:02:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU1NTgzNTVlYmE1OTViNDIwZWRiYTYyMmY2MTc4NjP7Te+n: 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU1NTgzNTVlYmE1OTViNDIwZWRiYTYyMmY2MTc4NjP7Te+n: 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: ]] 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.802 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.061 nvme0n1 00:25:14.061 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.062 19:02:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjhhZWZlZWI4M2Q0MGNiMmYwNjI3YTAxNzc0YTM5NmU0N2E3M2VkOWI4YjJjOTA180ALVg==: 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: 00:25:14.062 
19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjhhZWZlZWI4M2Q0MGNiMmYwNjI3YTAxNzc0YTM5NmU0N2E3M2VkOWI4YjJjOTA180ALVg==: 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: ]] 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.062 19:02:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.062 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.322 nvme0n1 00:25:14.322 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.322 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.322 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.322 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.322 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.322 19:02:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.581 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.581 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.581 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.581 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.581 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.581 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.581 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:14.581 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.581 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:14.581 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:14.581 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:14.581 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWZhOWZhMTg2ZTg5Y2FjMjYwOGViMjNlMGNiZDA2NjQzMmY4YjFmZmM2ZjdhMjQzYzY4ZjA3YThiN2Y3NjM2MyfNs4c=: 00:25:14.581 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:14.581 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:14.581 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:14.581 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWZhOWZhMTg2ZTg5Y2FjMjYwOGViMjNlMGNiZDA2NjQzMmY4YjFmZmM2ZjdhMjQzYzY4ZjA3YThiN2Y3NjM2MyfNs4c=: 00:25:14.581 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:25:14.581 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:14.581 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.582 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:14.582 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:14.582 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:14.582 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.582 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:14.582 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.582 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.582 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.582 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.582 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.582 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.582 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.582 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.582 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.582 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.582 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.582 
19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.582 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.582 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.582 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:14.582 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.582 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.841 nvme0n1 00:25:14.841 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.841 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.841 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.841 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.841 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.841 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.841 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.841 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.841 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.841 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.841 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.841 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:14.841 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.841 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:14.841 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.841 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:14.841 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:14.841 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:14.841 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI5N2Y5NzhhYzU0ZTMyOGJjMWQwNmJkZWE2ZWFkMmSoSm/T: 00:25:14.841 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: 00:25:14.841 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:14.841 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:14.841 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI5N2Y5NzhhYzU0ZTMyOGJjMWQwNmJkZWE2ZWFkMmSoSm/T: 00:25:14.841 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: ]] 00:25:14.841 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: 00:25:14.841 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:14.841 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.841 19:02:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:14.841 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:14.841 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:14.841 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.841 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:14.841 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.841 19:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.841 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.841 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.841 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.841 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.841 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.841 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.841 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.841 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.841 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.841 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.841 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.842 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.842 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:14.842 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.842 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.101 nvme0n1 00:25:15.101 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.101 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.101 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.101 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.101 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.101 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.101 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.101 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.101 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.101 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTc0NmQwOGVkMjQ4MmQzNzk4YzQ1MDI4MzNjMjcwNmUzMWM4ZmI1OTBjNzViMGYxxNdyOg==: 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTc0NmQwOGVkMjQ4MmQzNzk4YzQ1MDI4MzNjMjcwNmUzMWM4ZmI1OTBjNzViMGYxxNdyOg==: 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: ]] 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:15.360 19:02:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.360 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.619 nvme0n1 00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2
00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU1NTgzNTVlYmE1OTViNDIwZWRiYTYyMmY2MTc4NjP7Te+n:
00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i:
00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU1NTgzNTVlYmE1OTViNDIwZWRiYTYyMmY2MTc4NjP7Te+n:
00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: ]]
00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i:
00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:15.619 19:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:16.186 nvme0n1
00:25:16.186 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:16.186 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:16.186 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:16.186 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:16.186 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:16.186 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:16.186 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:16.186 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:16.186 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:16.186 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:16.186 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:16.186 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:16.186 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:25:16.186 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:16.186 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:16.186 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:16.186 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:16.186 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjhhZWZlZWI4M2Q0MGNiMmYwNjI3YTAxNzc0YTM5NmU0N2E3M2VkOWI4YjJjOTA180ALVg==:
00:25:16.186 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4:
00:25:16.186 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:16.186 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:16.186 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjhhZWZlZWI4M2Q0MGNiMmYwNjI3YTAxNzc0YTM5NmU0N2E3M2VkOWI4YjJjOTA180ALVg==:
00:25:16.186 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: ]]
00:25:16.187 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4:
00:25:16.187 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:25:16.187 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:16.187 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:16.187 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:16.187 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:16.187 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:16.187 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:25:16.187 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:16.187 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:16.187 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:16.187 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:16.187 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:16.187 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:16.187 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:16.187 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:16.187 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:16.187 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:16.187 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:16.187 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:16.187 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:16.187 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:16.187 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:16.187 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:16.187 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:16.445 nvme0n1
00:25:16.445 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:16.445 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:16.445 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:16.445 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:16.445 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:16.445 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:16.445 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:16.445 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:16.445 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:16.445 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:16.445 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:16.445 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:16.445 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:25:16.445 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:16.446 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:16.446 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:16.446 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:16.446 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWZhOWZhMTg2ZTg5Y2FjMjYwOGViMjNlMGNiZDA2NjQzMmY4YjFmZmM2ZjdhMjQzYzY4ZjA3YThiN2Y3NjM2MyfNs4c=:
00:25:16.446 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:16.446 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:16.446 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:16.446 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWZhOWZhMTg2ZTg5Y2FjMjYwOGViMjNlMGNiZDA2NjQzMmY4YjFmZmM2ZjdhMjQzYzY4ZjA3YThiN2Y3NjM2MyfNs4c=:
00:25:16.446 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:16.446 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4
00:25:16.446 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:16.446 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:16.446 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:16.446 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:16.446 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:16.446 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:25:16.446 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:16.446 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:16.704 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:16.704 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:16.704 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:16.704 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:16.704 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:16.704 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:16.704 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:16.704 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:16.704 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:16.704 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:16.704 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:16.704 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:16.704 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:16.704 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:16.704 19:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:16.963 nvme0n1
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI5N2Y5NzhhYzU0ZTMyOGJjMWQwNmJkZWE2ZWFkMmSoSm/T:
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=:
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI5N2Y5NzhhYzU0ZTMyOGJjMWQwNmJkZWE2ZWFkMmSoSm/T:
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: ]]
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=:
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:16.963 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:17.530 nvme0n1
00:25:17.530 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:17.530 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:17.530 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:17.530 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:17.530 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:17.530 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:17.530 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:17.530 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:17.530 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:17.530 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTc0NmQwOGVkMjQ4MmQzNzk4YzQ1MDI4MzNjMjcwNmUzMWM4ZmI1OTBjNzViMGYxxNdyOg==:
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==:
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTc0NmQwOGVkMjQ4MmQzNzk4YzQ1MDI4MzNjMjcwNmUzMWM4ZmI1OTBjNzViMGYxxNdyOg==:
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: ]]
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==:
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:17.789 19:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:18.356 nvme0n1
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU1NTgzNTVlYmE1OTViNDIwZWRiYTYyMmY2MTc4NjP7Te+n:
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i:
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU1NTgzNTVlYmE1OTViNDIwZWRiYTYyMmY2MTc4NjP7Te+n:
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: ]]
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i:
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:18.356 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:18.357 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:18.357 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:18.357 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:18.357 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:18.357 19:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:18.924 nvme0n1
00:25:18.924 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:18.924 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:18.924 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:18.924 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:18.924 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:18.924 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:18.924 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:18.924 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:18.924 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:18.924 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:18.924 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:18.924 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:18.924 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:25:18.924 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:18.924 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:18.924 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:18.924 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:18.924 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjhhZWZlZWI4M2Q0MGNiMmYwNjI3YTAxNzc0YTM5NmU0N2E3M2VkOWI4YjJjOTA180ALVg==:
00:25:18.925 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4:
00:25:18.925 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:18.925 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:18.925 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjhhZWZlZWI4M2Q0MGNiMmYwNjI3YTAxNzc0YTM5NmU0N2E3M2VkOWI4YjJjOTA180ALVg==:
00:25:18.925 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: ]]
00:25:18.925 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4:
00:25:18.925 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3
00:25:18.925 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:18.925 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:18.925 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:18.925 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:18.925 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:18.925 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:25:18.925 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:18.925 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:18.925 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:18.925 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:18.925 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:18.925 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:18.925 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:18.925 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:18.925 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:18.925 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:18.925 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:18.925 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:18.925 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:18.925 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:18.925 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:18.925 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:18.925 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:19.569 nvme0n1
00:25:19.569 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:19.569 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:19.569 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:19.569 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:19.569 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:19.569 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:19.569 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:19.569 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:19.569 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:19.569 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:19.569 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:19.569 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:19.569 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:25:19.569 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:19.569 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:19.569 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:19.569 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:19.569 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWZhOWZhMTg2ZTg5Y2FjMjYwOGViMjNlMGNiZDA2NjQzMmY4YjFmZmM2ZjdhMjQzYzY4ZjA3YThiN2Y3NjM2MyfNs4c=:
00:25:19.569 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:19.569 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:19.569 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:19.570 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWZhOWZhMTg2ZTg5Y2FjMjYwOGViMjNlMGNiZDA2NjQzMmY4YjFmZmM2ZjdhMjQzYzY4ZjA3YThiN2Y3NjM2MyfNs4c=:
00:25:19.570 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:19.570 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4
00:25:19.570 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:19.570 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:19.570 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:19.570 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:19.570 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:19.570 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:25:19.570 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:19.570 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:19.570 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:19.570 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:19.570 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:19.570 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:19.570 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:19.570 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:19.570 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:19.570 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:19.570 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:19.570 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:19.570 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:19.570 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.570 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:19.570 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.570 19:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.169 nvme0n1 00:25:20.169 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.169 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.169 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.169 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.169 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.169 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.169 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.169 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.169 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.169 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.169 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.169 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:20.169 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:20.169 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:25:20.169 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:20.169 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.169 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:20.169 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:20.169 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:20.170 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI5N2Y5NzhhYzU0ZTMyOGJjMWQwNmJkZWE2ZWFkMmSoSm/T: 00:25:20.170 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: 00:25:20.170 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:20.170 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:20.170 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI5N2Y5NzhhYzU0ZTMyOGJjMWQwNmJkZWE2ZWFkMmSoSm/T: 00:25:20.170 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: ]] 00:25:20.170 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: 00:25:20.170 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:20.170 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.170 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:20.170 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:25:20.170 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:20.170 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.170 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:20.170 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.170 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.170 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.170 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.170 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:20.170 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:20.170 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:20.170 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.170 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.170 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:20.170 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.170 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:20.170 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:20.170 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:20.170 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:20.170 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.170 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.430 nvme0n1 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:20.430 
19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTc0NmQwOGVkMjQ4MmQzNzk4YzQ1MDI4MzNjMjcwNmUzMWM4ZmI1OTBjNzViMGYxxNdyOg==: 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTc0NmQwOGVkMjQ4MmQzNzk4YzQ1MDI4MzNjMjcwNmUzMWM4ZmI1OTBjNzViMGYxxNdyOg==: 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: ]] 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.430 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.690 nvme0n1 
00:25:20.690 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.690 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.690 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.690 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.690 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.690 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.690 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.690 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.690 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.690 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.690 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.690 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.690 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:20.690 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.690 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:20.690 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:20.690 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:20.690 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU1NTgzNTVlYmE1OTViNDIwZWRiYTYyMmY2MTc4NjP7Te+n: 00:25:20.690 19:02:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: 00:25:20.691 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:20.691 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:20.691 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU1NTgzNTVlYmE1OTViNDIwZWRiYTYyMmY2MTc4NjP7Te+n: 00:25:20.691 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: ]] 00:25:20.691 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: 00:25:20.691 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:20.691 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.691 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:20.691 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:20.691 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:20.691 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.691 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:20.691 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.691 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.691 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.691 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.691 
19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:20.691 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:20.691 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:20.691 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.691 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.691 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:20.691 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.691 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:20.691 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:20.691 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:20.691 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:20.691 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.691 19:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.950 nvme0n1 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.950 19:02:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjhhZWZlZWI4M2Q0MGNiMmYwNjI3YTAxNzc0YTM5NmU0N2E3M2VkOWI4YjJjOTA180ALVg==: 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NjhhZWZlZWI4M2Q0MGNiMmYwNjI3YTAxNzc0YTM5NmU0N2E3M2VkOWI4YjJjOTA180ALVg==: 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: ]] 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.950 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.210 nvme0n1 00:25:21.210 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.210 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.210 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.210 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.210 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.210 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.210 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.210 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:21.210 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.210 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.210 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.210 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.210 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:21.210 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.210 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:21.210 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:21.210 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:21.210 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWZhOWZhMTg2ZTg5Y2FjMjYwOGViMjNlMGNiZDA2NjQzMmY4YjFmZmM2ZjdhMjQzYzY4ZjA3YThiN2Y3NjM2MyfNs4c=: 00:25:21.210 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:21.210 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:21.210 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:21.210 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWZhOWZhMTg2ZTg5Y2FjMjYwOGViMjNlMGNiZDA2NjQzMmY4YjFmZmM2ZjdhMjQzYzY4ZjA3YThiN2Y3NjM2MyfNs4c=: 00:25:21.211 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:21.211 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:21.211 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.211 19:02:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:21.211 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:21.211 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:21.211 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.211 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:21.211 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.211 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.211 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.211 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.211 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.211 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.211 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.211 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.211 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.211 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.211 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.211 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.211 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.211 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.211 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:21.211 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.211 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.471 nvme0n1 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI5N2Y5NzhhYzU0ZTMyOGJjMWQwNmJkZWE2ZWFkMmSoSm/T: 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI5N2Y5NzhhYzU0ZTMyOGJjMWQwNmJkZWE2ZWFkMmSoSm/T: 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: ]] 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.471 nvme0n1 00:25:21.471 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:21.731 
19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTc0NmQwOGVkMjQ4MmQzNzk4YzQ1MDI4MzNjMjcwNmUzMWM4ZmI1OTBjNzViMGYxxNdyOg==: 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTc0NmQwOGVkMjQ4MmQzNzk4YzQ1MDI4MzNjMjcwNmUzMWM4ZmI1OTBjNzViMGYxxNdyOg==: 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: ]] 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.731 19:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.731 nvme0n1 00:25:21.731 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:21.731 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.731 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.731 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.731 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.991 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.991 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.991 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.991 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.991 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.991 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.991 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.991 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:21.991 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.991 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:21.991 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:21.991 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:21.991 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU1NTgzNTVlYmE1OTViNDIwZWRiYTYyMmY2MTc4NjP7Te+n: 00:25:21.991 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: 
00:25:21.991 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:21.991 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:21.991 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU1NTgzNTVlYmE1OTViNDIwZWRiYTYyMmY2MTc4NjP7Te+n: 00:25:21.991 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: ]] 00:25:21.991 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: 00:25:21.991 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:21.991 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.991 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:21.991 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:21.991 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:21.991 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.991 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:21.991 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.991 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.991 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.991 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.991 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.992 19:02:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.992 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.992 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.992 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.992 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.992 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.992 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.992 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.992 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.992 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:21.992 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.992 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.992 nvme0n1 00:25:21.992 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.992 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.992 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.992 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.992 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.992 19:02:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjhhZWZlZWI4M2Q0MGNiMmYwNjI3YTAxNzc0YTM5NmU0N2E3M2VkOWI4YjJjOTA180ALVg==: 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjhhZWZlZWI4M2Q0MGNiMmYwNjI3YTAxNzc0YTM5NmU0N2E3M2VkOWI4YjJjOTA180ALVg==: 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: ]] 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.252 nvme0n1 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:22.252 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWZhOWZhMTg2ZTg5Y2FjMjYwOGViMjNlMGNiZDA2NjQzMmY4YjFmZmM2ZjdhMjQzYzY4ZjA3YThiN2Y3NjM2MyfNs4c=: 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWZhOWZhMTg2ZTg5Y2FjMjYwOGViMjNlMGNiZDA2NjQzMmY4YjFmZmM2ZjdhMjQzYzY4ZjA3YThiN2Y3NjM2MyfNs4c=: 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.512 nvme0n1 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.512 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.771 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.771 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:22.771 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.771 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:22.771 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.771 19:02:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:22.771 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:22.771 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:22.771 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI5N2Y5NzhhYzU0ZTMyOGJjMWQwNmJkZWE2ZWFkMmSoSm/T: 00:25:22.771 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: 00:25:22.771 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:22.771 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:22.771 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI5N2Y5NzhhYzU0ZTMyOGJjMWQwNmJkZWE2ZWFkMmSoSm/T: 00:25:22.771 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: ]] 00:25:22.771 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: 00:25:22.771 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:22.771 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.771 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:22.771 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:22.771 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:22.771 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.771 19:02:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:22.771 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.771 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.771 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.771 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.771 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.771 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.771 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.771 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.771 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.771 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.771 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.772 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.772 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.772 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.772 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:22.772 19:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.772 19:02:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.031 nvme0n1 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTc0NmQwOGVkMjQ4MmQzNzk4YzQ1MDI4MzNjMjcwNmUzMWM4ZmI1OTBjNzViMGYxxNdyOg==: 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTc0NmQwOGVkMjQ4MmQzNzk4YzQ1MDI4MzNjMjcwNmUzMWM4ZmI1OTBjNzViMGYxxNdyOg==: 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: ]] 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.031 
19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.031 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.032 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:23.032 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.032 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.291 nvme0n1 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.291 19:02:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU1NTgzNTVlYmE1OTViNDIwZWRiYTYyMmY2MTc4NjP7Te+n: 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:23.291 19:02:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU1NTgzNTVlYmE1OTViNDIwZWRiYTYyMmY2MTc4NjP7Te+n: 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: ]] 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.291 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.549 nvme0n1 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjhhZWZlZWI4M2Q0MGNiMmYwNjI3YTAxNzc0YTM5NmU0N2E3M2VkOWI4YjJjOTA180ALVg==: 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjhhZWZlZWI4M2Q0MGNiMmYwNjI3YTAxNzc0YTM5NmU0N2E3M2VkOWI4YjJjOTA180ALVg==: 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: ]] 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.549 19:02:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.549 19:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.807 nvme0n1 00:25:23.807 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.807 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.807 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.807 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.807 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.807 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.807 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.807 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.807 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.807 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.066 19:02:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.066 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.066 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:24.066 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.066 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:24.066 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:24.066 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:24.066 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWZhOWZhMTg2ZTg5Y2FjMjYwOGViMjNlMGNiZDA2NjQzMmY4YjFmZmM2ZjdhMjQzYzY4ZjA3YThiN2Y3NjM2MyfNs4c=: 00:25:24.066 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:24.066 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:24.066 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:24.066 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWZhOWZhMTg2ZTg5Y2FjMjYwOGViMjNlMGNiZDA2NjQzMmY4YjFmZmM2ZjdhMjQzYzY4ZjA3YThiN2Y3NjM2MyfNs4c=: 00:25:24.066 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:24.066 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:24.066 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.067 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:24.067 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:24.067 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:24.067 19:02:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.067 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:24.067 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.067 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.067 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.067 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.067 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.067 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.067 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.067 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.067 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.067 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.067 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.067 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.067 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.067 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.067 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:24.067 
19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.067 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.325 nvme0n1 00:25:24.325 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.325 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.325 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.325 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.325 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.325 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.325 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.325 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.325 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.325 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.325 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.325 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:24.325 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.325 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:24.326 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.326 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:24.326 19:02:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:24.326 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:24.326 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI5N2Y5NzhhYzU0ZTMyOGJjMWQwNmJkZWE2ZWFkMmSoSm/T: 00:25:24.326 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: 00:25:24.326 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:24.326 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:24.326 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI5N2Y5NzhhYzU0ZTMyOGJjMWQwNmJkZWE2ZWFkMmSoSm/T: 00:25:24.326 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: ]] 00:25:24.326 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: 00:25:24.326 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:24.326 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.326 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:24.326 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:24.326 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:24.326 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.326 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:25:24.326 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.326 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.326 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.326 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.326 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.326 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.326 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.326 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.326 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.326 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.326 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.326 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.326 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.326 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.326 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:24.326 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.326 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.584 nvme0n1 
00:25:24.584 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.584 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.584 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.584 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.584 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.584 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTc0NmQwOGVkMjQ4MmQzNzk4YzQ1MDI4MzNjMjcwNmUzMWM4ZmI1OTBjNzViMGYxxNdyOg==: 00:25:24.843 19:02:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTc0NmQwOGVkMjQ4MmQzNzk4YzQ1MDI4MzNjMjcwNmUzMWM4ZmI1OTBjNzViMGYxxNdyOg==: 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: ]] 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.843 
19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.843 19:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.102 nvme0n1 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.102 19:02:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU1NTgzNTVlYmE1OTViNDIwZWRiYTYyMmY2MTc4NjP7Te+n: 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:25.102 19:02:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU1NTgzNTVlYmE1OTViNDIwZWRiYTYyMmY2MTc4NjP7Te+n: 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: ]] 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.102 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.670 nvme0n1 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjhhZWZlZWI4M2Q0MGNiMmYwNjI3YTAxNzc0YTM5NmU0N2E3M2VkOWI4YjJjOTA180ALVg==: 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjhhZWZlZWI4M2Q0MGNiMmYwNjI3YTAxNzc0YTM5NmU0N2E3M2VkOWI4YjJjOTA180ALVg==: 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: ]] 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: 00:25:25.670 19:02:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.670 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.671 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.671 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.671 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.671 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.671 19:02:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.671 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.671 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.671 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:25.671 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.671 19:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.929 nvme0n1 00:25:25.929 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.929 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.929 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.929 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.929 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.188 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.188 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.188 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.188 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.188 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.188 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.188 19:02:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.188 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:26.188 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.188 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:26.188 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:26.188 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:26.188 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWZhOWZhMTg2ZTg5Y2FjMjYwOGViMjNlMGNiZDA2NjQzMmY4YjFmZmM2ZjdhMjQzYzY4ZjA3YThiN2Y3NjM2MyfNs4c=: 00:25:26.188 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:26.188 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:26.188 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:26.188 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWZhOWZhMTg2ZTg5Y2FjMjYwOGViMjNlMGNiZDA2NjQzMmY4YjFmZmM2ZjdhMjQzYzY4ZjA3YThiN2Y3NjM2MyfNs4c=: 00:25:26.188 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:26.188 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:26.188 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.188 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:26.188 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:26.188 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:26.188 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:26.188 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:26.188 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.188 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.188 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.188 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.189 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.189 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.189 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.189 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.189 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.189 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.189 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.189 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.189 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.189 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.189 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:26.189 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:26.189 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.447 nvme0n1 00:25:26.447 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.447 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.447 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.447 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.447 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.447 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.447 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.447 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.447 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.447 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.447 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.447 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:26.447 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.447 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:26.447 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.447 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:26.448 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:26.448 19:02:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:26.448 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI5N2Y5NzhhYzU0ZTMyOGJjMWQwNmJkZWE2ZWFkMmSoSm/T: 00:25:26.448 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: 00:25:26.448 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:26.448 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:26.448 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI5N2Y5NzhhYzU0ZTMyOGJjMWQwNmJkZWE2ZWFkMmSoSm/T: 00:25:26.448 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: ]] 00:25:26.448 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: 00:25:26.448 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:26.448 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.448 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:26.448 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:26.448 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:26.448 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.448 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:26.448 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.448 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.448 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.448 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.448 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.448 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.448 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.448 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.448 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.448 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.448 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.448 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.448 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.448 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.448 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:26.448 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.448 19:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.014 nvme0n1 00:25:27.014 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:27.014 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.014 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.014 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.014 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.014 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTc0NmQwOGVkMjQ4MmQzNzk4YzQ1MDI4MzNjMjcwNmUzMWM4ZmI1OTBjNzViMGYxxNdyOg==: 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTc0NmQwOGVkMjQ4MmQzNzk4YzQ1MDI4MzNjMjcwNmUzMWM4ZmI1OTBjNzViMGYxxNdyOg==: 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: ]] 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.273 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.841 nvme0n1 00:25:27.841 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.841 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.841 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.841 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:27.841 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.841 19:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.841 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.841 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.841 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.841 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.841 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.841 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.841 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:27.841 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.841 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:27.841 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:27.841 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:27.841 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU1NTgzNTVlYmE1OTViNDIwZWRiYTYyMmY2MTc4NjP7Te+n: 00:25:27.841 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: 00:25:27.841 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:27.841 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:27.841 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NDU1NTgzNTVlYmE1OTViNDIwZWRiYTYyMmY2MTc4NjP7Te+n: 00:25:27.841 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: ]] 00:25:27.842 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: 00:25:27.842 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:27.842 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.842 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:27.842 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:27.842 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:27.842 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.842 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:27.842 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.842 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.842 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.842 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.842 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.842 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.842 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:27.842 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.842 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.842 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:27.842 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.842 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:27.842 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.842 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.842 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:27.842 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.842 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.410 nvme0n1 00:25:28.410 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.410 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.410 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.410 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.410 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.410 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.410 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.410 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:28.410 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.410 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.410 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.410 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.410 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:28.410 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.410 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:28.410 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:28.410 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:28.410 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjhhZWZlZWI4M2Q0MGNiMmYwNjI3YTAxNzc0YTM5NmU0N2E3M2VkOWI4YjJjOTA180ALVg==: 00:25:28.410 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: 00:25:28.410 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:28.410 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:28.410 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjhhZWZlZWI4M2Q0MGNiMmYwNjI3YTAxNzc0YTM5NmU0N2E3M2VkOWI4YjJjOTA180ALVg==: 00:25:28.410 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: ]] 00:25:28.410 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: 00:25:28.410 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:28.410 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.410 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:28.411 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:28.411 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:28.411 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.411 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:28.411 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.411 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.411 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.411 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.411 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.411 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.411 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.411 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.411 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.411 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.411 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.411 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:25:28.411 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.411 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.411 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:28.411 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.411 19:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.978 nvme0n1 00:25:28.978 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.978 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.978 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.978 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.978 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWZhOWZhMTg2ZTg5Y2FjMjYwOGViMjNlMGNiZDA2NjQzMmY4YjFmZmM2ZjdhMjQzYzY4ZjA3YThiN2Y3NjM2MyfNs4c=: 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWZhOWZhMTg2ZTg5Y2FjMjYwOGViMjNlMGNiZDA2NjQzMmY4YjFmZmM2ZjdhMjQzYzY4ZjA3YThiN2Y3NjM2MyfNs4c=: 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.237 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:29.806 nvme0n1 00:25:29.806 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.806 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.806 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.806 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.806 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.806 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.806 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.806 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.806 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.806 19:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI5N2Y5NzhhYzU0ZTMyOGJjMWQwNmJkZWE2ZWFkMmSoSm/T: 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI5N2Y5NzhhYzU0ZTMyOGJjMWQwNmJkZWE2ZWFkMmSoSm/T: 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: ]] 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:29.806 19:02:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.806 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.066 nvme0n1 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTc0NmQwOGVkMjQ4MmQzNzk4YzQ1MDI4MzNjMjcwNmUzMWM4ZmI1OTBjNzViMGYxxNdyOg==: 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTc0NmQwOGVkMjQ4MmQzNzk4YzQ1MDI4MzNjMjcwNmUzMWM4ZmI1OTBjNzViMGYxxNdyOg==: 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: ]] 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.066 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.066 nvme0n1 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU1NTgzNTVlYmE1OTViNDIwZWRiYTYyMmY2MTc4NjP7Te+n: 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NDU1NTgzNTVlYmE1OTViNDIwZWRiYTYyMmY2MTc4NjP7Te+n: 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: ]] 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.326 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:30.327 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.327 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.327 nvme0n1 00:25:30.327 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.327 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.327 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.327 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.327 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.327 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjhhZWZlZWI4M2Q0MGNiMmYwNjI3YTAxNzc0YTM5NmU0N2E3M2VkOWI4YjJjOTA180ALVg==: 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjhhZWZlZWI4M2Q0MGNiMmYwNjI3YTAxNzc0YTM5NmU0N2E3M2VkOWI4YjJjOTA180ALVg==: 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: ]] 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.586 nvme0n1 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWZhOWZhMTg2ZTg5Y2FjMjYwOGViMjNlMGNiZDA2NjQzMmY4YjFmZmM2ZjdhMjQzYzY4ZjA3YThiN2Y3NjM2MyfNs4c=: 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWZhOWZhMTg2ZTg5Y2FjMjYwOGViMjNlMGNiZDA2NjQzMmY4YjFmZmM2ZjdhMjQzYzY4ZjA3YThiN2Y3NjM2MyfNs4c=: 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:30.586 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:30.587 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.587 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:30.587 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.587 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.846 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.846 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.846 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.846 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.847 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.847 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.847 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.847 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.847 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.847 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.847 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.847 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.847 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:30.847 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.847 19:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:30.847 nvme0n1 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:30.847 19:02:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI5N2Y5NzhhYzU0ZTMyOGJjMWQwNmJkZWE2ZWFkMmSoSm/T: 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI5N2Y5NzhhYzU0ZTMyOGJjMWQwNmJkZWE2ZWFkMmSoSm/T: 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: ]] 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.847 19:02:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.847 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.107 nvme0n1 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTc0NmQwOGVkMjQ4MmQzNzk4YzQ1MDI4MzNjMjcwNmUzMWM4ZmI1OTBjNzViMGYxxNdyOg==: 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: 00:25:31.107 19:02:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTc0NmQwOGVkMjQ4MmQzNzk4YzQ1MDI4MzNjMjcwNmUzMWM4ZmI1OTBjNzViMGYxxNdyOg==: 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: ]] 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.107 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.367 nvme0n1 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.367 
19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU1NTgzNTVlYmE1OTViNDIwZWRiYTYyMmY2MTc4NjP7Te+n: 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU1NTgzNTVlYmE1OTViNDIwZWRiYTYyMmY2MTc4NjP7Te+n: 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: ]] 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.367 19:02:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.367 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.627 nvme0n1 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.627 19:02:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjhhZWZlZWI4M2Q0MGNiMmYwNjI3YTAxNzc0YTM5NmU0N2E3M2VkOWI4YjJjOTA180ALVg==: 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjhhZWZlZWI4M2Q0MGNiMmYwNjI3YTAxNzc0YTM5NmU0N2E3M2VkOWI4YjJjOTA180ALVg==: 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: ]] 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.627 19:02:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.627 19:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.887 nvme0n1 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:31.887 19:02:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWZhOWZhMTg2ZTg5Y2FjMjYwOGViMjNlMGNiZDA2NjQzMmY4YjFmZmM2ZjdhMjQzYzY4ZjA3YThiN2Y3NjM2MyfNs4c=: 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWZhOWZhMTg2ZTg5Y2FjMjYwOGViMjNlMGNiZDA2NjQzMmY4YjFmZmM2ZjdhMjQzYzY4ZjA3YThiN2Y3NjM2MyfNs4c=: 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.887 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.146 nvme0n1 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.146 
19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI5N2Y5NzhhYzU0ZTMyOGJjMWQwNmJkZWE2ZWFkMmSoSm/T: 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI5N2Y5NzhhYzU0ZTMyOGJjMWQwNmJkZWE2ZWFkMmSoSm/T: 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: ]] 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.146 
19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.146 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.405 nvme0n1 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.405 19:02:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTc0NmQwOGVkMjQ4MmQzNzk4YzQ1MDI4MzNjMjcwNmUzMWM4ZmI1OTBjNzViMGYxxNdyOg==: 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTc0NmQwOGVkMjQ4MmQzNzk4YzQ1MDI4MzNjMjcwNmUzMWM4ZmI1OTBjNzViMGYxxNdyOg==: 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: ]] 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.405 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.664 nvme0n1 00:25:32.664 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.664 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.664 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.664 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.664 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.924 19:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU1NTgzNTVlYmE1OTViNDIwZWRiYTYyMmY2MTc4NjP7Te+n: 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU1NTgzNTVlYmE1OTViNDIwZWRiYTYyMmY2MTc4NjP7Te+n: 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: ]] 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.924 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.183 nvme0n1 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjhhZWZlZWI4M2Q0MGNiMmYwNjI3YTAxNzc0YTM5NmU0N2E3M2VkOWI4YjJjOTA180ALVg==: 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjhhZWZlZWI4M2Q0MGNiMmYwNjI3YTAxNzc0YTM5NmU0N2E3M2VkOWI4YjJjOTA180ALVg==: 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: ]] 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.184 19:02:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.184 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.443 nvme0n1 00:25:33.443 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.443 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.443 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.443 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.443 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.443 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.443 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.443 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.443 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.443 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.443 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.443 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.443 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:33.443 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.443 19:02:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.443 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:33.443 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:33.443 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWZhOWZhMTg2ZTg5Y2FjMjYwOGViMjNlMGNiZDA2NjQzMmY4YjFmZmM2ZjdhMjQzYzY4ZjA3YThiN2Y3NjM2MyfNs4c=: 00:25:33.443 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:33.443 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.443 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:33.443 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWZhOWZhMTg2ZTg5Y2FjMjYwOGViMjNlMGNiZDA2NjQzMmY4YjFmZmM2ZjdhMjQzYzY4ZjA3YThiN2Y3NjM2MyfNs4c=: 00:25:33.443 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:33.443 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:33.443 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.443 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.443 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:33.443 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:33.444 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.444 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:33.444 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.444 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:33.444 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.444 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.444 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.444 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.444 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.444 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.444 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.444 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.444 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.444 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.444 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.444 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.444 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:33.444 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.444 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.703 nvme0n1 00:25:33.703 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.703 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.703 
19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.703 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.703 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.703 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.703 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.703 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.703 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.703 19:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.703 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.703 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:33.703 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.703 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:33.703 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.703 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.703 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:33.703 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:33.703 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI5N2Y5NzhhYzU0ZTMyOGJjMWQwNmJkZWE2ZWFkMmSoSm/T: 00:25:33.703 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: 00:25:33.703 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.703 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:33.703 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI5N2Y5NzhhYzU0ZTMyOGJjMWQwNmJkZWE2ZWFkMmSoSm/T: 00:25:33.703 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: ]] 00:25:33.703 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: 00:25:33.703 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:33.703 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.703 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.703 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:33.703 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:33.703 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.703 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:33.703 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.703 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.962 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.962 19:02:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.962 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.962 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.962 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.962 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.962 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.962 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.962 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.962 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.962 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.962 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.962 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:33.962 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.962 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.222 nvme0n1 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTc0NmQwOGVkMjQ4MmQzNzk4YzQ1MDI4MzNjMjcwNmUzMWM4ZmI1OTBjNzViMGYxxNdyOg==: 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:34.222 19:02:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTc0NmQwOGVkMjQ4MmQzNzk4YzQ1MDI4MzNjMjcwNmUzMWM4ZmI1OTBjNzViMGYxxNdyOg==: 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: ]] 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.222 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.791 nvme0n1 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU1NTgzNTVlYmE1OTViNDIwZWRiYTYyMmY2MTc4NjP7Te+n: 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU1NTgzNTVlYmE1OTViNDIwZWRiYTYyMmY2MTc4NjP7Te+n: 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: ]] 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: 00:25:34.791 
19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.791 19:02:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.791 19:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.051 nvme0n1 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.051 19:02:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjhhZWZlZWI4M2Q0MGNiMmYwNjI3YTAxNzc0YTM5NmU0N2E3M2VkOWI4YjJjOTA180ALVg==: 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjhhZWZlZWI4M2Q0MGNiMmYwNjI3YTAxNzc0YTM5NmU0N2E3M2VkOWI4YjJjOTA180ALVg==: 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: ]] 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.051 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.619 nvme0n1 00:25:35.619 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.619 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.619 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.619 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.619 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.619 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.619 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.619 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.619 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.619 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.619 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.619 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.619 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:35.619 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.619 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:35.619 19:02:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:35.619 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:35.619 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWZhOWZhMTg2ZTg5Y2FjMjYwOGViMjNlMGNiZDA2NjQzMmY4YjFmZmM2ZjdhMjQzYzY4ZjA3YThiN2Y3NjM2MyfNs4c=: 00:25:35.619 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:35.619 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:35.620 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:35.620 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWZhOWZhMTg2ZTg5Y2FjMjYwOGViMjNlMGNiZDA2NjQzMmY4YjFmZmM2ZjdhMjQzYzY4ZjA3YThiN2Y3NjM2MyfNs4c=: 00:25:35.620 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:35.620 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:35.620 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.620 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:35.620 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:35.620 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:35.620 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.620 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:35.620 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.620 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.620 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.620 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.620 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.620 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.620 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.620 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.620 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.620 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.620 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.620 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.620 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.620 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.620 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:35.620 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.620 19:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.878 nvme0n1 00:25:35.878 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.878 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.878 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.878 
19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.878 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI5N2Y5NzhhYzU0ZTMyOGJjMWQwNmJkZWE2ZWFkMmSoSm/T: 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI5N2Y5NzhhYzU0ZTMyOGJjMWQwNmJkZWE2ZWFkMmSoSm/T: 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: ]] 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzQzMTI2M2Q0MzI1MGRiNDQ1NzY3Y2I2ZWZiMGJiOTkyNGU0NGMzOWMyMWZiMTM0N2I3Yzg1ODkwODg1MmJlOLDJzVg=: 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.138 19:02:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.138 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.706 nvme0n1 00:25:36.706 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.706 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.706 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.706 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.706 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.706 19:02:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.706 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.706 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.706 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.706 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.706 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.706 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.706 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:36.706 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.706 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:36.706 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:36.706 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:36.706 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTc0NmQwOGVkMjQ4MmQzNzk4YzQ1MDI4MzNjMjcwNmUzMWM4ZmI1OTBjNzViMGYxxNdyOg==: 00:25:36.706 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: 00:25:36.706 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:36.706 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:36.706 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTc0NmQwOGVkMjQ4MmQzNzk4YzQ1MDI4MzNjMjcwNmUzMWM4ZmI1OTBjNzViMGYxxNdyOg==: 00:25:36.706 19:02:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: ]] 00:25:36.706 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: 00:25:36.706 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:36.706 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.706 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:36.706 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:36.706 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:36.706 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.707 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:36.707 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.707 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.707 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.707 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.707 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.707 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.707 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.707 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.707 19:02:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.707 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.707 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.707 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.707 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.707 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.707 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:36.707 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.707 19:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.274 nvme0n1 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.274 19:02:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU1NTgzNTVlYmE1OTViNDIwZWRiYTYyMmY2MTc4NjP7Te+n: 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU1NTgzNTVlYmE1OTViNDIwZWRiYTYyMmY2MTc4NjP7Te+n: 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: ]] 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:37.274 19:02:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.274 19:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.841 nvme0n1 00:25:37.841 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.841 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.842 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.842 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.842 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.842 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.842 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.842 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.842 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.842 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjhhZWZlZWI4M2Q0MGNiMmYwNjI3YTAxNzc0YTM5NmU0N2E3M2VkOWI4YjJjOTA180ALVg==: 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjhhZWZlZWI4M2Q0MGNiMmYwNjI3YTAxNzc0YTM5NmU0N2E3M2VkOWI4YjJjOTA180ALVg==: 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: ]] 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjkzOGRhNzM5NjJlMTgxNTFmY2NlNmMwZjJmNDA1MTHmRzF4: 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:38.101 19:03:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.101 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.670 nvme0n1 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWZhOWZhMTg2ZTg5Y2FjMjYwOGViMjNlMGNiZDA2NjQzMmY4YjFmZmM2ZjdhMjQzYzY4ZjA3YThiN2Y3NjM2MyfNs4c=: 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWZhOWZhMTg2ZTg5Y2FjMjYwOGViMjNlMGNiZDA2NjQzMmY4YjFmZmM2ZjdhMjQzYzY4ZjA3YThiN2Y3NjM2MyfNs4c=: 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.670 
19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.670 19:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.239 nvme0n1 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTc0NmQwOGVkMjQ4MmQzNzk4YzQ1MDI4MzNjMjcwNmUzMWM4ZmI1OTBjNzViMGYxxNdyOg==: 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTc0NmQwOGVkMjQ4MmQzNzk4YzQ1MDI4MzNjMjcwNmUzMWM4ZmI1OTBjNzViMGYxxNdyOg==: 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: ]] 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.239 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.499 request: 00:25:39.499 { 00:25:39.499 "name": "nvme0", 00:25:39.499 "trtype": "tcp", 00:25:39.499 "traddr": "10.0.0.1", 00:25:39.499 "adrfam": "ipv4", 00:25:39.499 "trsvcid": "4420", 00:25:39.499 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:39.499 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:39.499 "prchk_reftag": false, 00:25:39.499 "prchk_guard": false, 00:25:39.499 "hdgst": false, 00:25:39.499 "ddgst": false, 00:25:39.499 "allow_unrecognized_csi": false, 00:25:39.499 "method": "bdev_nvme_attach_controller", 00:25:39.499 "req_id": 1 00:25:39.499 } 00:25:39.499 Got JSON-RPC error 
response 00:25:39.499 response: 00:25:39.499 { 00:25:39.499 "code": -5, 00:25:39.499 "message": "Input/output error" 00:25:39.499 } 00:25:39.499 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:39.499 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:39.499 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:39.499 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:39.499 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:39.499 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.499 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.499 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.499 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:39.499 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.499 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:39.499 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:39.499 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.499 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.499 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.499 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.499 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.499 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 
-- # [[ -z tcp ]] 00:25:39.499 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.499 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.499 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.499 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.499 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:39.499 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:39.499 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:39.499 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:39.499 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:39.499 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:39.499 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:39.499 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:39.499 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.499 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.499 request: 
00:25:39.499 { 00:25:39.499 "name": "nvme0", 00:25:39.499 "trtype": "tcp", 00:25:39.499 "traddr": "10.0.0.1", 00:25:39.499 "adrfam": "ipv4", 00:25:39.499 "trsvcid": "4420", 00:25:39.499 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:39.499 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:39.499 "prchk_reftag": false, 00:25:39.499 "prchk_guard": false, 00:25:39.499 "hdgst": false, 00:25:39.499 "ddgst": false, 00:25:39.499 "dhchap_key": "key2", 00:25:39.499 "allow_unrecognized_csi": false, 00:25:39.499 "method": "bdev_nvme_attach_controller", 00:25:39.499 "req_id": 1 00:25:39.499 } 00:25:39.499 Got JSON-RPC error response 00:25:39.499 response: 00:25:39.499 { 00:25:39.499 "code": -5, 00:25:39.499 "message": "Input/output error" 00:25:39.499 } 00:25:39.499 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:39.499 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:39.500 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:39.500 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:39.500 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:39.500 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.500 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:39.500 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.500 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.500 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.500 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:39.500 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:25:39.500 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.500 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.500 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.500 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.500 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.500 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.500 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.500 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.500 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.500 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.500 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:39.500 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:39.500 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:39.500 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:39.500 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:39.500 19:03:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:39.500 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:39.500 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:39.500 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.500 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.759 request: 00:25:39.759 { 00:25:39.759 "name": "nvme0", 00:25:39.759 "trtype": "tcp", 00:25:39.759 "traddr": "10.0.0.1", 00:25:39.759 "adrfam": "ipv4", 00:25:39.759 "trsvcid": "4420", 00:25:39.759 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:39.759 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:39.759 "prchk_reftag": false, 00:25:39.759 "prchk_guard": false, 00:25:39.759 "hdgst": false, 00:25:39.759 "ddgst": false, 00:25:39.759 "dhchap_key": "key1", 00:25:39.759 "dhchap_ctrlr_key": "ckey2", 00:25:39.759 "allow_unrecognized_csi": false, 00:25:39.759 "method": "bdev_nvme_attach_controller", 00:25:39.759 "req_id": 1 00:25:39.759 } 00:25:39.759 Got JSON-RPC error response 00:25:39.759 response: 00:25:39.759 { 00:25:39.759 "code": -5, 00:25:39.759 "message": "Input/output error" 00:25:39.759 } 00:25:39.759 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:39.759 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:39.759 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:39.759 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:39.759 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:39.759 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:25:39.759 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.759 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.759 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.759 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.759 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.759 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.759 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.759 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.759 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.759 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.759 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:39.759 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.759 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.759 nvme0n1 00:25:39.759 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.759 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:39.759 19:03:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.759 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:39.759 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:39.759 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:39.759 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU1NTgzNTVlYmE1OTViNDIwZWRiYTYyMmY2MTc4NjP7Te+n: 00:25:39.759 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: 00:25:39.759 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:39.760 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:39.760 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU1NTgzNTVlYmE1OTViNDIwZWRiYTYyMmY2MTc4NjP7Te+n: 00:25:39.760 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: ]] 00:25:39.760 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: 00:25:39.760 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:39.760 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.760 19:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.760 19:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.760 19:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.760 19:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:25:39.760 
19:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.760 19:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.760 19:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.019 19:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.019 19:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:40.019 19:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:40.019 19:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:40.019 19:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:40.019 19:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:40.019 19:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:40.019 19:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:40.019 19:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:40.019 19:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.019 19:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.019 request: 00:25:40.019 { 00:25:40.019 "name": "nvme0", 00:25:40.019 "dhchap_key": "key1", 00:25:40.019 "dhchap_ctrlr_key": "ckey2", 00:25:40.019 "method": "bdev_nvme_set_keys", 00:25:40.019 "req_id": 1 00:25:40.019 } 00:25:40.019 Got JSON-RPC error response 00:25:40.019 response: 
00:25:40.019 { 00:25:40.019 "code": -13, 00:25:40.019 "message": "Permission denied" 00:25:40.019 } 00:25:40.019 19:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:40.019 19:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:40.019 19:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:40.019 19:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:40.019 19:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:40.019 19:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:40.019 19:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.019 19:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.019 19:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.019 19:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.019 19:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:40.019 19:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:40.956 19:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.956 19:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:40.956 19:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.956 19:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.956 19:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.956 19:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:40.956 19:03:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTc0NmQwOGVkMjQ4MmQzNzk4YzQ1MDI4MzNjMjcwNmUzMWM4ZmI1OTBjNzViMGYxxNdyOg==: 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTc0NmQwOGVkMjQ4MmQzNzk4YzQ1MDI4MzNjMjcwNmUzMWM4ZmI1OTBjNzViMGYxxNdyOg==: 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: ]] 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2Y1YTI1NjQ4MWE5OTIzZjE1YTk3NjU3ZTk0NDNhZGM3ZGY5ZWJhMGI1MjBkMTllst1vbQ==: 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.334 nvme0n1 00:25:42.334 19:03:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU1NTgzNTVlYmE1OTViNDIwZWRiYTYyMmY2MTc4NjP7Te+n: 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU1NTgzNTVlYmE1OTViNDIwZWRiYTYyMmY2MTc4NjP7Te+n: 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: ]] 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjUwNGQ1YmNmOWRjMjllMzRmNWY0MDUzY2YyZWQwMDIaYc7i: 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:42.334 19:03:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.334 request: 00:25:42.334 { 00:25:42.334 "name": "nvme0", 00:25:42.334 "dhchap_key": "key2", 00:25:42.334 "dhchap_ctrlr_key": "ckey1", 00:25:42.334 "method": "bdev_nvme_set_keys", 00:25:42.334 "req_id": 1 00:25:42.334 } 00:25:42.334 Got JSON-RPC error response 00:25:42.334 response: 00:25:42.334 { 00:25:42.334 "code": -13, 00:25:42.334 "message": "Permission denied" 00:25:42.334 } 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:42.334 19:03:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:25:42.334 19:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:25:43.270 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.270 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:43.270 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.270 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.270 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.529 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:25:43.529 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:25:43.529 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:25:43.529 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:43.529 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:43.529 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:25:43.529 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:43.529 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:25:43.529 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:43.529 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:43.529 rmmod nvme_tcp 
00:25:43.529 rmmod nvme_fabrics 00:25:43.529 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:43.529 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:25:43.529 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:25:43.529 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3774981 ']' 00:25:43.529 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3774981 00:25:43.529 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3774981 ']' 00:25:43.529 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3774981 00:25:43.529 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:25:43.529 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:43.529 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3774981 00:25:43.529 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:43.529 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:43.529 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3774981' 00:25:43.529 killing process with pid 3774981 00:25:43.529 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3774981 00:25:43.529 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3774981 00:25:43.788 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:43.788 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:43.788 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:43.788 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:25:43.788 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:25:43.788 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:43.788 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:43.788 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:43.788 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:43.788 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.788 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:43.788 19:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.693 19:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:45.693 19:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:45.693 19:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:45.693 19:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:45.693 19:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:45.693 19:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:25:45.693 19:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:45.693 19:03:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:45.693 19:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:45.693 19:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:45.693 19:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:45.693 19:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:45.953 19:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:48.490 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:48.490 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:48.491 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:48.752 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:48.752 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:48.752 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:48.752 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:48.752 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:48.752 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:48.752 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:48.752 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:48.752 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:48.752 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:48.752 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:48.752 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:48.752 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:50.130 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:50.130 19:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.rZZ /tmp/spdk.key-null.Rpj /tmp/spdk.key-sha256.Qd6 /tmp/spdk.key-sha384.yRt 
/tmp/spdk.key-sha512.rTZ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:50.130 19:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:53.420 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:25:53.420 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:53.420 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:25:53.420 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:25:53.420 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:25:53.420 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:25:53.420 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:25:53.420 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:25:53.420 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:25:53.420 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:25:53.420 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:25:53.420 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:25:53.420 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:25:53.420 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:25:53.420 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:25:53.420 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:25:53.420 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:25:53.420 00:25:53.420 real 0m54.462s 00:25:53.420 user 0m48.632s 00:25:53.420 sys 0m12.676s 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.421 ************************************ 00:25:53.421 END TEST nvmf_auth_host 00:25:53.421 ************************************ 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # 
[[ tcp == \t\c\p ]] 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.421 ************************************ 00:25:53.421 START TEST nvmf_digest 00:25:53.421 ************************************ 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:53.421 * Looking for test storage... 00:25:53.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:25:53.421 19:03:15 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:53.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.421 --rc genhtml_branch_coverage=1 00:25:53.421 --rc genhtml_function_coverage=1 00:25:53.421 --rc genhtml_legend=1 00:25:53.421 --rc geninfo_all_blocks=1 00:25:53.421 --rc geninfo_unexecuted_blocks=1 00:25:53.421 00:25:53.421 ' 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:53.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.421 --rc genhtml_branch_coverage=1 00:25:53.421 --rc genhtml_function_coverage=1 00:25:53.421 --rc genhtml_legend=1 00:25:53.421 --rc geninfo_all_blocks=1 00:25:53.421 --rc geninfo_unexecuted_blocks=1 00:25:53.421 00:25:53.421 ' 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:53.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.421 --rc genhtml_branch_coverage=1 00:25:53.421 --rc genhtml_function_coverage=1 00:25:53.421 --rc genhtml_legend=1 00:25:53.421 --rc geninfo_all_blocks=1 00:25:53.421 --rc geninfo_unexecuted_blocks=1 00:25:53.421 00:25:53.421 ' 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:53.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.421 --rc genhtml_branch_coverage=1 00:25:53.421 --rc genhtml_function_coverage=1 00:25:53.421 --rc genhtml_legend=1 00:25:53.421 --rc geninfo_all_blocks=1 00:25:53.421 --rc geninfo_unexecuted_blocks=1 00:25:53.421 00:25:53.421 ' 00:25:53.421 19:03:15 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:53.421 
19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:53.421 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:53.422 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:53.422 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:25:53.422 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:53.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:53.422 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:53.422 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:53.422 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:53.422 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:53.422 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:53.422 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:53.422 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:53.422 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:53.422 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:53.422 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:53.422 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:53.422 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:53.422 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:53.422 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.422 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:53.422 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.422 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:53.422 19:03:15 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:53.422 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:25:53.422 19:03:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:59.995 19:03:21 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:59.995 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:59.995 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:59.995 Found net devices under 0000:86:00.0: cvl_0_0 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:59.995 Found net devices under 0000:86:00.1: cvl_0_1 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:59.995 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:59.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:59.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.477 ms 00:25:59.996 00:25:59.996 --- 10.0.0.2 ping statistics --- 00:25:59.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.996 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:59.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:59.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:25:59.996 00:25:59.996 --- 10.0.0.1 ping statistics --- 00:25:59.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.996 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:59.996 ************************************ 00:25:59.996 START TEST nvmf_digest_clean 00:25:59.996 ************************************ 00:25:59.996 
19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3789257 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3789257 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3789257 ']' 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:59.996 19:03:21 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:59.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:59.996 [2024-11-20 19:03:21.570715] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:25:59.996 [2024-11-20 19:03:21.570755] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:59.996 [2024-11-20 19:03:21.631313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.996 [2024-11-20 19:03:21.672402] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:59.996 [2024-11-20 19:03:21.672435] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:59.996 [2024-11-20 19:03:21.672443] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:59.996 [2024-11-20 19:03:21.672449] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:59.996 [2024-11-20 19:03:21.672454] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:59.996 [2024-11-20 19:03:21.673003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:59.996 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:59.997 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.997 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:59.997 null0 00:25:59.997 [2024-11-20 19:03:21.853643] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:59.997 [2024-11-20 19:03:21.877847] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:59.997 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.997 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:25:59.997 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:59.997 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:59.997 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:59.997 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:59.997 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:59.997 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:59.997 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3789277 00:25:59.997 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3789277 /var/tmp/bperf.sock 00:25:59.997 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:59.997 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3789277 ']' 00:25:59.997 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:59.997 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:59.997 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:59.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:25:59.997 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:59.997 19:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:59.997 [2024-11-20 19:03:21.930746] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:25:59.997 [2024-11-20 19:03:21.930785] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3789277 ] 00:25:59.997 [2024-11-20 19:03:22.004946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.997 [2024-11-20 19:03:22.045650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:59.997 19:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:59.997 19:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:59.997 19:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:59.997 19:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:59.997 19:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:00.256 19:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:00.256 19:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:00.515 nvme0n1 00:26:00.515 19:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:00.515 19:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:00.515 Running I/O for 2 seconds... 00:26:02.829 25735.00 IOPS, 100.53 MiB/s [2024-11-20T18:03:25.154Z] 25450.50 IOPS, 99.42 MiB/s 00:26:02.829 Latency(us) 00:26:02.829 [2024-11-20T18:03:25.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.829 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:02.829 nvme0n1 : 2.04 24975.87 97.56 0.00 0.00 5021.16 2512.21 43690.67 00:26:02.829 [2024-11-20T18:03:25.154Z] =================================================================================================================== 00:26:02.829 [2024-11-20T18:03:25.154Z] Total : 24975.87 97.56 0.00 0.00 5021.16 2512.21 43690.67 00:26:02.829 { 00:26:02.829 "results": [ 00:26:02.829 { 00:26:02.829 "job": "nvme0n1", 00:26:02.829 "core_mask": "0x2", 00:26:02.830 "workload": "randread", 00:26:02.830 "status": "finished", 00:26:02.830 "queue_depth": 128, 00:26:02.830 "io_size": 4096, 00:26:02.830 "runtime": 2.043132, 00:26:02.830 "iops": 24975.870379397904, 00:26:02.830 "mibps": 97.56199366952306, 00:26:02.830 "io_failed": 0, 00:26:02.830 "io_timeout": 0, 00:26:02.830 "avg_latency_us": 5021.156315633781, 00:26:02.830 "min_latency_us": 2512.213333333333, 00:26:02.830 "max_latency_us": 43690.666666666664 00:26:02.830 } 00:26:02.830 ], 00:26:02.830 "core_count": 1 00:26:02.830 } 00:26:02.830 19:03:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:02.830 19:03:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:26:02.830 19:03:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:02.830 19:03:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:02.830 19:03:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:02.830 | select(.opcode=="crc32c") 00:26:02.830 | "\(.module_name) \(.executed)"' 00:26:02.830 19:03:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:02.830 19:03:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:02.830 19:03:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:02.830 19:03:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:02.830 19:03:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3789277 00:26:02.830 19:03:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3789277 ']' 00:26:02.830 19:03:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3789277 00:26:02.830 19:03:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:02.830 19:03:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:02.830 19:03:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3789277 00:26:02.830 19:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:02.830 19:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:02.830 19:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3789277' 00:26:02.830 killing process with pid 3789277 00:26:02.830 19:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3789277 00:26:02.830 Received shutdown signal, test time was about 2.000000 seconds 00:26:02.830 00:26:02.830 Latency(us) 00:26:02.830 [2024-11-20T18:03:25.155Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.830 [2024-11-20T18:03:25.155Z] =================================================================================================================== 00:26:02.830 [2024-11-20T18:03:25.155Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:02.830 19:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3789277 00:26:03.089 19:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:03.089 19:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:03.089 19:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:03.089 19:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:03.089 19:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:03.089 19:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:03.089 19:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:03.089 19:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3789753 00:26:03.089 19:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 3789753 /var/tmp/bperf.sock 00:26:03.089 19:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:03.089 19:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3789753 ']' 00:26:03.089 19:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:03.089 19:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:03.089 19:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:03.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:03.089 19:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:03.089 19:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:03.089 [2024-11-20 19:03:25.246466] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:26:03.089 [2024-11-20 19:03:25.246512] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3789753 ] 00:26:03.089 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:03.089 Zero copy mechanism will not be used. 
00:26:03.089 [2024-11-20 19:03:25.317228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.089 [2024-11-20 19:03:25.358682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:03.089 19:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:03.089 19:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:03.089 19:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:03.089 19:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:03.089 19:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:03.349 19:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:03.349 19:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:03.917 nvme0n1 00:26:03.917 19:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:03.917 19:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:03.917 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:03.917 Zero copy mechanism will not be used. 00:26:03.917 Running I/O for 2 seconds... 
00:26:05.872 5808.00 IOPS, 726.00 MiB/s [2024-11-20T18:03:28.197Z] 5720.00 IOPS, 715.00 MiB/s 00:26:05.872 Latency(us) 00:26:05.872 [2024-11-20T18:03:28.197Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:05.872 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:05.872 nvme0n1 : 2.00 5719.26 714.91 0.00 0.00 2794.81 620.25 10360.93 00:26:05.872 [2024-11-20T18:03:28.197Z] =================================================================================================================== 00:26:05.872 [2024-11-20T18:03:28.197Z] Total : 5719.26 714.91 0.00 0.00 2794.81 620.25 10360.93 00:26:05.872 { 00:26:05.872 "results": [ 00:26:05.872 { 00:26:05.872 "job": "nvme0n1", 00:26:05.872 "core_mask": "0x2", 00:26:05.872 "workload": "randread", 00:26:05.872 "status": "finished", 00:26:05.872 "queue_depth": 16, 00:26:05.872 "io_size": 131072, 00:26:05.872 "runtime": 2.003058, 00:26:05.872 "iops": 5719.255258709433, 00:26:05.872 "mibps": 714.9069073386792, 00:26:05.872 "io_failed": 0, 00:26:05.872 "io_timeout": 0, 00:26:05.872 "avg_latency_us": 2794.807810587922, 00:26:05.872 "min_latency_us": 620.2514285714286, 00:26:05.872 "max_latency_us": 10360.929523809524 00:26:05.872 } 00:26:05.872 ], 00:26:05.872 "core_count": 1 00:26:05.872 } 00:26:05.872 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:05.872 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:05.872 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:05.872 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:05.872 | select(.opcode=="crc32c") 00:26:05.872 | "\(.module_name) \(.executed)"' 00:26:05.872 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:06.132 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:06.132 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:06.132 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:06.132 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:06.132 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3789753 00:26:06.132 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3789753 ']' 00:26:06.132 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3789753 00:26:06.132 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:06.132 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:06.132 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3789753 00:26:06.132 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:06.132 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:06.132 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3789753' 00:26:06.132 killing process with pid 3789753 00:26:06.132 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3789753 00:26:06.132 Received shutdown signal, test time was about 2.000000 seconds 
00:26:06.132 00:26:06.132 Latency(us) 00:26:06.132 [2024-11-20T18:03:28.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.132 [2024-11-20T18:03:28.457Z] =================================================================================================================== 00:26:06.132 [2024-11-20T18:03:28.457Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:06.132 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3789753 00:26:06.392 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:06.392 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:06.392 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:06.392 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:06.392 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:06.392 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:06.392 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:06.392 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3790398 00:26:06.392 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3790398 /var/tmp/bperf.sock 00:26:06.392 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:06.392 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3790398 ']' 00:26:06.392 19:03:28 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:06.392 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:06.392 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:06.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:06.392 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:06.392 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:06.392 [2024-11-20 19:03:28.549535] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:26:06.392 [2024-11-20 19:03:28.549583] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3790398 ] 00:26:06.392 [2024-11-20 19:03:28.623037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.392 [2024-11-20 19:03:28.664727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:06.392 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:06.392 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:06.392 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:06.392 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:06.392 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:06.652 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:06.652 19:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:07.219 nvme0n1 00:26:07.219 19:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:07.219 19:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:07.219 Running I/O for 2 seconds... 
00:26:09.553 27976.00 IOPS, 109.28 MiB/s [2024-11-20T18:03:31.878Z] 28250.00 IOPS, 110.35 MiB/s 00:26:09.553 Latency(us) 00:26:09.553 [2024-11-20T18:03:31.878Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.553 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:09.553 nvme0n1 : 2.00 28271.51 110.44 0.00 0.00 4523.38 2215.74 10048.85 00:26:09.553 [2024-11-20T18:03:31.878Z] =================================================================================================================== 00:26:09.553 [2024-11-20T18:03:31.878Z] Total : 28271.51 110.44 0.00 0.00 4523.38 2215.74 10048.85 00:26:09.553 { 00:26:09.553 "results": [ 00:26:09.553 { 00:26:09.553 "job": "nvme0n1", 00:26:09.553 "core_mask": "0x2", 00:26:09.553 "workload": "randwrite", 00:26:09.553 "status": "finished", 00:26:09.553 "queue_depth": 128, 00:26:09.553 "io_size": 4096, 00:26:09.553 "runtime": 2.003006, 00:26:09.553 "iops": 28271.50792359084, 00:26:09.553 "mibps": 110.43557782652672, 00:26:09.553 "io_failed": 0, 00:26:09.553 "io_timeout": 0, 00:26:09.553 "avg_latency_us": 4523.375138531502, 00:26:09.554 "min_latency_us": 2215.7409523809524, 00:26:09.554 "max_latency_us": 10048.853333333333 00:26:09.554 } 00:26:09.554 ], 00:26:09.554 "core_count": 1 00:26:09.554 } 00:26:09.554 19:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:09.554 19:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:09.554 19:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:09.554 19:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:09.554 | select(.opcode=="crc32c") 00:26:09.554 | "\(.module_name) \(.executed)"' 00:26:09.554 19:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:09.554 19:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:09.554 19:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:09.554 19:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:09.554 19:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:09.554 19:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3790398 00:26:09.554 19:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3790398 ']' 00:26:09.554 19:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3790398 00:26:09.554 19:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:09.554 19:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:09.554 19:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3790398 00:26:09.554 19:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:09.554 19:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:09.554 19:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3790398' 00:26:09.554 killing process with pid 3790398 00:26:09.554 19:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3790398 00:26:09.554 Received shutdown signal, test time was about 2.000000 seconds 
00:26:09.554 00:26:09.554 Latency(us) 00:26:09.554 [2024-11-20T18:03:31.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.554 [2024-11-20T18:03:31.879Z] =================================================================================================================== 00:26:09.554 [2024-11-20T18:03:31.879Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:09.554 19:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3790398 00:26:09.813 19:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:09.813 19:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:09.814 19:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:09.814 19:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:09.814 19:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:09.814 19:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:09.814 19:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:09.814 19:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3790920 00:26:09.814 19:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3790920 /var/tmp/bperf.sock 00:26:09.814 19:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:09.814 19:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3790920 ']' 00:26:09.814 19:03:31 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:09.814 19:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:09.814 19:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:09.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:09.814 19:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:09.814 19:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:09.814 [2024-11-20 19:03:31.944656] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:26:09.814 [2024-11-20 19:03:31.944702] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3790920 ] 00:26:09.814 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:09.814 Zero copy mechanism will not be used. 
00:26:09.814 [2024-11-20 19:03:32.018697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.814 [2024-11-20 19:03:32.060090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:09.814 19:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:09.814 19:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:09.814 19:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:09.814 19:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:09.814 19:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:10.073 19:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:10.073 19:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:10.640 nvme0n1 00:26:10.640 19:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:10.640 19:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:10.640 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:10.640 Zero copy mechanism will not be used. 00:26:10.640 Running I/O for 2 seconds... 
00:26:12.954 5907.00 IOPS, 738.38 MiB/s [2024-11-20T18:03:35.279Z] 6359.50 IOPS, 794.94 MiB/s 00:26:12.954 Latency(us) 00:26:12.954 [2024-11-20T18:03:35.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:12.954 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:12.954 nvme0n1 : 2.00 6359.42 794.93 0.00 0.00 2511.89 1497.97 11734.06 00:26:12.954 [2024-11-20T18:03:35.279Z] =================================================================================================================== 00:26:12.954 [2024-11-20T18:03:35.279Z] Total : 6359.42 794.93 0.00 0.00 2511.89 1497.97 11734.06 00:26:12.954 { 00:26:12.954 "results": [ 00:26:12.954 { 00:26:12.954 "job": "nvme0n1", 00:26:12.954 "core_mask": "0x2", 00:26:12.954 "workload": "randwrite", 00:26:12.954 "status": "finished", 00:26:12.954 "queue_depth": 16, 00:26:12.954 "io_size": 131072, 00:26:12.954 "runtime": 2.003169, 00:26:12.954 "iops": 6359.423493474589, 00:26:12.954 "mibps": 794.9279366843236, 00:26:12.954 "io_failed": 0, 00:26:12.954 "io_timeout": 0, 00:26:12.954 "avg_latency_us": 2511.885370384907, 00:26:12.954 "min_latency_us": 1497.9657142857143, 00:26:12.954 "max_latency_us": 11734.064761904761 00:26:12.954 } 00:26:12.954 ], 00:26:12.954 "core_count": 1 00:26:12.954 } 00:26:12.954 19:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:12.954 19:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:12.954 19:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:12.954 19:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:12.954 | select(.opcode=="crc32c") 00:26:12.954 | "\(.module_name) \(.executed)"' 00:26:12.955 19:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:12.955 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:12.955 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:12.955 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:12.955 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:12.955 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3790920 00:26:12.955 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3790920 ']' 00:26:12.955 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3790920 00:26:12.955 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:12.955 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:12.955 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3790920 00:26:12.955 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:12.955 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:12.955 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3790920' 00:26:12.955 killing process with pid 3790920 00:26:12.955 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3790920 00:26:12.955 Received shutdown signal, test time was about 2.000000 seconds 
00:26:12.955 00:26:12.955 Latency(us) 00:26:12.955 [2024-11-20T18:03:35.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:12.955 [2024-11-20T18:03:35.280Z] =================================================================================================================== 00:26:12.955 [2024-11-20T18:03:35.280Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:12.955 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3790920 00:26:13.213 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3789257 00:26:13.213 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3789257 ']' 00:26:13.213 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3789257 00:26:13.213 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:13.213 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:13.213 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3789257 00:26:13.213 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:13.213 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:13.213 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3789257' 00:26:13.213 killing process with pid 3789257 00:26:13.213 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3789257 00:26:13.213 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3789257 00:26:13.213 00:26:13.213 
real 0m14.018s 00:26:13.213 user 0m26.794s 00:26:13.213 sys 0m4.526s 00:26:13.213 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:13.213 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:13.213 ************************************ 00:26:13.213 END TEST nvmf_digest_clean 00:26:13.213 ************************************ 00:26:13.471 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:13.471 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:13.471 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:13.471 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:13.471 ************************************ 00:26:13.471 START TEST nvmf_digest_error 00:26:13.471 ************************************ 00:26:13.471 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:26:13.471 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:13.471 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:13.471 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:13.471 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:13.471 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3791581 00:26:13.471 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3791581 00:26:13.471 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:13.471 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3791581 ']' 00:26:13.471 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.471 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:13.471 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.471 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:13.471 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:13.471 [2024-11-20 19:03:35.655230] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:26:13.471 [2024-11-20 19:03:35.655276] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:13.471 [2024-11-20 19:03:35.736321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.471 [2024-11-20 19:03:35.776537] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:13.471 [2024-11-20 19:03:35.776569] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:13.471 [2024-11-20 19:03:35.776577] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:13.471 [2024-11-20 19:03:35.776583] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:13.471 [2024-11-20 19:03:35.776588] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:13.471 [2024-11-20 19:03:35.777123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.728 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:13.728 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:13.728 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:13.728 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:13.728 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:13.728 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:13.728 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:13.728 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.728 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:13.728 [2024-11-20 19:03:35.853581] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:13.728 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.728 19:03:35 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:26:13.728 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:26:13.728 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:13.728 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:13.728 null0
00:26:13.728 [2024-11-20 19:03:35.948412] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:13.728 [2024-11-20 19:03:35.972605] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:13.728 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:13.728 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:26:13.728 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:13.728 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:26:13.728 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:26:13.728 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:26:13.728 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3791674
00:26:13.728 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3791674 /var/tmp/bperf.sock
00:26:13.728 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:26:13.728 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3791674 ']'
00:26:13.728 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:13.729 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:13.729 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:13.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:13.729 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:13.729 19:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:13.729 [2024-11-20 19:03:36.026944] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization...
00:26:13.729 [2024-11-20 19:03:36.026985] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3791674 ]
00:26:13.986 [2024-11-20 19:03:36.101369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:13.986 [2024-11-20 19:03:36.143619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:13.986 19:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:13.986 19:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:26:13.986 19:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:13.986 19:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:14.243 19:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:14.243 19:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.243 19:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:14.243 19:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.243 19:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:14.243 19:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:14.501 nvme0n1
00:26:14.501 19:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:26:14.501 19:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.501 19:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:14.501 19:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.501 19:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:14.501 19:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:14.501 Running I/O for 2 seconds...
00:26:14.501 [2024-11-20 19:03:36.824175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:14.501 [2024-11-20 19:03:36.824212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.501 [2024-11-20 19:03:36.824227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.759 [2024-11-20 19:03:36.834058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:14.759 [2024-11-20 19:03:36.834082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.759 [2024-11-20 19:03:36.834091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.759 [2024-11-20 19:03:36.844868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:14.759 [2024-11-20 19:03:36.844889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.759 [2024-11-20 19:03:36.844898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.759 [2024-11-20 19:03:36.853438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:14.759 [2024-11-20 19:03:36.853459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.759 [2024-11-20 19:03:36.853468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.759 [2024-11-20 19:03:36.862981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:14.759 [2024-11-20 19:03:36.863002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.759 [2024-11-20 19:03:36.863010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.759 [2024-11-20 19:03:36.873175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:14.759 [2024-11-20 19:03:36.873195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.759 [2024-11-20 19:03:36.873209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.759 [2024-11-20 19:03:36.881529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:14.759 [2024-11-20 19:03:36.881549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.759 [2024-11-20 19:03:36.881557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.759 [2024-11-20 19:03:36.890817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:14.759 [2024-11-20 19:03:36.890838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.759 [2024-11-20 19:03:36.890846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.759 [2024-11-20 19:03:36.900794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:14.759 [2024-11-20 19:03:36.900814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.759 [2024-11-20 19:03:36.900823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.759 [2024-11-20 19:03:36.909600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:14.759 [2024-11-20 19:03:36.909621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.759 [2024-11-20 19:03:36.909628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.759 [2024-11-20 19:03:36.921498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:14.759 [2024-11-20 19:03:36.921518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.759 [2024-11-20 19:03:36.921527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.759 [2024-11-20 19:03:36.933743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:14.759 [2024-11-20 19:03:36.933763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.759 [2024-11-20 19:03:36.933771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.759 [2024-11-20 19:03:36.941605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:14.759 [2024-11-20 19:03:36.941625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.759 [2024-11-20 19:03:36.941633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.759 [2024-11-20 19:03:36.953307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:14.759 [2024-11-20 19:03:36.953329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.759 [2024-11-20 19:03:36.953336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.759 [2024-11-20 19:03:36.963677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:14.759 [2024-11-20 19:03:36.963697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.759 [2024-11-20 19:03:36.963705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.759 [2024-11-20 19:03:36.974575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:14.760 [2024-11-20 19:03:36.974596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.760 [2024-11-20 19:03:36.974605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.760 [2024-11-20 19:03:36.984659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:14.760 [2024-11-20 19:03:36.984680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.760 [2024-11-20 19:03:36.984688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.760 [2024-11-20 19:03:36.993589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:14.760 [2024-11-20 19:03:36.993610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.760 [2024-11-20 19:03:36.993622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.760 [2024-11-20 19:03:37.003089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:14.760 [2024-11-20 19:03:37.003109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.760 [2024-11-20 19:03:37.003118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.760 [2024-11-20 19:03:37.016187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:14.760 [2024-11-20 19:03:37.016213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.760 [2024-11-20 19:03:37.016221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.760 [2024-11-20 19:03:37.025211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:14.760 [2024-11-20 19:03:37.025232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.760 [2024-11-20 19:03:37.025240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.760 [2024-11-20 19:03:37.033784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:14.760 [2024-11-20 19:03:37.033805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.760 [2024-11-20 19:03:37.033812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.760 [2024-11-20 19:03:37.042499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:14.760 [2024-11-20 19:03:37.042520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.760 [2024-11-20 19:03:37.042527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.760 [2024-11-20 19:03:37.053315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:14.760 [2024-11-20 19:03:37.053335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.760 [2024-11-20 19:03:37.053343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.760 [2024-11-20 19:03:37.062027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:14.760 [2024-11-20 19:03:37.062047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.760 [2024-11-20 19:03:37.062055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.760 [2024-11-20 19:03:37.070701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:14.760 [2024-11-20 19:03:37.070722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.760 [2024-11-20 19:03:37.070730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.760 [2024-11-20 19:03:37.080855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:14.760 [2024-11-20 19:03:37.080878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.760 [2024-11-20 19:03:37.080886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.018 [2024-11-20 19:03:37.093245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.018 [2024-11-20 19:03:37.093265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.018 [2024-11-20 19:03:37.093273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.018 [2024-11-20 19:03:37.105971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.018 [2024-11-20 19:03:37.105991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.018 [2024-11-20 19:03:37.105999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.018 [2024-11-20 19:03:37.117032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.018 [2024-11-20 19:03:37.117052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.018 [2024-11-20 19:03:37.117060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.018 [2024-11-20 19:03:37.125723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.018 [2024-11-20 19:03:37.125743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.018 [2024-11-20 19:03:37.125751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.018 [2024-11-20 19:03:37.137594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.018 [2024-11-20 19:03:37.137615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.018 [2024-11-20 19:03:37.137623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.018 [2024-11-20 19:03:37.149118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.018 [2024-11-20 19:03:37.149139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.018 [2024-11-20 19:03:37.149147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.018 [2024-11-20 19:03:37.159914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.018 [2024-11-20 19:03:37.159935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.018 [2024-11-20 19:03:37.159942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.018 [2024-11-20 19:03:37.168164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.018 [2024-11-20 19:03:37.168183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.018 [2024-11-20 19:03:37.168191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.018 [2024-11-20 19:03:37.178762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.019 [2024-11-20 19:03:37.178782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.019 [2024-11-20 19:03:37.178790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.019 [2024-11-20 19:03:37.189512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.019 [2024-11-20 19:03:37.189531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.019 [2024-11-20 19:03:37.189539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.019 [2024-11-20 19:03:37.197707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.019 [2024-11-20 19:03:37.197727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.019 [2024-11-20 19:03:37.197735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.019 [2024-11-20 19:03:37.207175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.019 [2024-11-20 19:03:37.207195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.019 [2024-11-20 19:03:37.207208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.019 [2024-11-20 19:03:37.217419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.019 [2024-11-20 19:03:37.217438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.019 [2024-11-20 19:03:37.217446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.019 [2024-11-20 19:03:37.227790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.019 [2024-11-20 19:03:37.227809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.019 [2024-11-20 19:03:37.227817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.019 [2024-11-20 19:03:37.238102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.019 [2024-11-20 19:03:37.238122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.019 [2024-11-20 19:03:37.238131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.019 [2024-11-20 19:03:37.247448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.019 [2024-11-20 19:03:37.247468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.019 [2024-11-20 19:03:37.247476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.019 [2024-11-20 19:03:37.258930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.019 [2024-11-20 19:03:37.258950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.019 [2024-11-20 19:03:37.258965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.019 [2024-11-20 19:03:37.270014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.019 [2024-11-20 19:03:37.270034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.019 [2024-11-20 19:03:37.270041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.019 [2024-11-20 19:03:37.278172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.019 [2024-11-20 19:03:37.278191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.019 [2024-11-20 19:03:37.278199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.019 [2024-11-20 19:03:37.288296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.019 [2024-11-20 19:03:37.288316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.019 [2024-11-20 19:03:37.288324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.019 [2024-11-20 19:03:37.296845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.019 [2024-11-20 19:03:37.296865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.019 [2024-11-20 19:03:37.296872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.019 [2024-11-20 19:03:37.308367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.019 [2024-11-20 19:03:37.308387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.019 [2024-11-20 19:03:37.308395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.019 [2024-11-20 19:03:37.321004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.019 [2024-11-20 19:03:37.321024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.019 [2024-11-20 19:03:37.321031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.019 [2024-11-20 19:03:37.331455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.019 [2024-11-20 19:03:37.331474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.019 [2024-11-20 19:03:37.331482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.019 [2024-11-20 19:03:37.339805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.019 [2024-11-20 19:03:37.339826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.019 [2024-11-20 19:03:37.339834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.277 [2024-11-20 19:03:37.350778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.277 [2024-11-20 19:03:37.350801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.277 [2024-11-20 19:03:37.350809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.277 [2024-11-20 19:03:37.362433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.277 [2024-11-20 19:03:37.362452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.277 [2024-11-20 19:03:37.362460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.277 [2024-11-20 19:03:37.372492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.277 [2024-11-20 19:03:37.372512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.277 [2024-11-20 19:03:37.372520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.277 [2024-11-20 19:03:37.382008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.277 [2024-11-20 19:03:37.382027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.277 [2024-11-20 19:03:37.382035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.277 [2024-11-20 19:03:37.391344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.277 [2024-11-20 19:03:37.391363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.277 [2024-11-20 19:03:37.391371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.277 [2024-11-20 19:03:37.399824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.277 [2024-11-20 19:03:37.399844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.278 [2024-11-20 19:03:37.399852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.278 [2024-11-20 19:03:37.409374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.278 [2024-11-20 19:03:37.409393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.278 [2024-11-20 19:03:37.409401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.278 [2024-11-20 19:03:37.418707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.278 [2024-11-20 19:03:37.418726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.278 [2024-11-20 19:03:37.418733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.278 [2024-11-20 19:03:37.426666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.278 [2024-11-20 19:03:37.426685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.278 [2024-11-20 19:03:37.426693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.278 [2024-11-20 19:03:37.437896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.278 [2024-11-20 19:03:37.437916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.278 [2024-11-20 19:03:37.437924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.278 [2024-11-20 19:03:37.446879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.278 [2024-11-20 19:03:37.446898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.278 [2024-11-20 19:03:37.446906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.278 [2024-11-20 19:03:37.455027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.278 [2024-11-20 19:03:37.455046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.278 [2024-11-20 19:03:37.455054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.278 [2024-11-20 19:03:37.464844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.278 [2024-11-20 19:03:37.464864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.278 [2024-11-20 19:03:37.464872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.278 [2024-11-20 19:03:37.474469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.278 [2024-11-20 19:03:37.474488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.278 [2024-11-20 19:03:37.474496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.278 [2024-11-20 19:03:37.483691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.278 [2024-11-20 19:03:37.483711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.278 [2024-11-20 19:03:37.483719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.278 [2024-11-20 19:03:37.492255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.278 [2024-11-20 19:03:37.492275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:14292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.278 [2024-11-20 19:03:37.492283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.278 [2024-11-20 19:03:37.502473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740)
00:26:15.278 [2024-11-20 19:03:37.502494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.278 [2024-11-20 19:03:37.502501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.278 [2024-11-20 19:03:37.511020] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.278 [2024-11-20 19:03:37.511043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.278 [2024-11-20 19:03:37.511051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.278 [2024-11-20 19:03:37.521623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.278 [2024-11-20 19:03:37.521643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:25471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.278 [2024-11-20 19:03:37.521651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.278 [2024-11-20 19:03:37.531460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.278 [2024-11-20 19:03:37.531480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.278 [2024-11-20 19:03:37.531488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.278 [2024-11-20 19:03:37.540053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.278 [2024-11-20 19:03:37.540072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.278 [2024-11-20 19:03:37.540080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:15.278 [2024-11-20 19:03:37.549698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.278 [2024-11-20 19:03:37.549718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.278 [2024-11-20 19:03:37.549725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.278 [2024-11-20 19:03:37.559217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.278 [2024-11-20 19:03:37.559236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.278 [2024-11-20 19:03:37.559244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.278 [2024-11-20 19:03:37.568540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.278 [2024-11-20 19:03:37.568560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.278 [2024-11-20 19:03:37.568567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.278 [2024-11-20 19:03:37.577844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.278 [2024-11-20 19:03:37.577863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.278 [2024-11-20 19:03:37.577871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.278 [2024-11-20 19:03:37.587017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.278 [2024-11-20 19:03:37.587036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.278 [2024-11-20 19:03:37.587043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.278 [2024-11-20 19:03:37.596264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.278 [2024-11-20 19:03:37.596283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.278 [2024-11-20 19:03:37.596291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.536 [2024-11-20 19:03:37.606115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.536 [2024-11-20 19:03:37.606135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.536 [2024-11-20 19:03:37.606143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.536 [2024-11-20 19:03:37.616227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.536 [2024-11-20 19:03:37.616247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.537 [2024-11-20 19:03:37.616254] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.537 [2024-11-20 19:03:37.625001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.537 [2024-11-20 19:03:37.625020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.537 [2024-11-20 19:03:37.625027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.537 [2024-11-20 19:03:37.634235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.537 [2024-11-20 19:03:37.634255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.537 [2024-11-20 19:03:37.634263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.537 [2024-11-20 19:03:37.643937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.537 [2024-11-20 19:03:37.643956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.537 [2024-11-20 19:03:37.643964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.537 [2024-11-20 19:03:37.653485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.537 [2024-11-20 19:03:37.653504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17430 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:15.537 [2024-11-20 19:03:37.653512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.537 [2024-11-20 19:03:37.663589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.537 [2024-11-20 19:03:37.663609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.537 [2024-11-20 19:03:37.663616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.537 [2024-11-20 19:03:37.671768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.537 [2024-11-20 19:03:37.671787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.537 [2024-11-20 19:03:37.671799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.537 [2024-11-20 19:03:37.682988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.537 [2024-11-20 19:03:37.683007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.537 [2024-11-20 19:03:37.683015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.537 [2024-11-20 19:03:37.692806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.537 [2024-11-20 19:03:37.692826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:34 nsid:1 lba:6643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.537 [2024-11-20 19:03:37.692833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.537 [2024-11-20 19:03:37.700742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.537 [2024-11-20 19:03:37.700761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.537 [2024-11-20 19:03:37.700769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.537 [2024-11-20 19:03:37.710381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.537 [2024-11-20 19:03:37.710401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.537 [2024-11-20 19:03:37.710409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.537 [2024-11-20 19:03:37.721462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.537 [2024-11-20 19:03:37.721482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.537 [2024-11-20 19:03:37.721490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.537 [2024-11-20 19:03:37.730689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.537 [2024-11-20 19:03:37.730708] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.537 [2024-11-20 19:03:37.730717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.537 [2024-11-20 19:03:37.739843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.537 [2024-11-20 19:03:37.739863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.537 [2024-11-20 19:03:37.739871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.537 [2024-11-20 19:03:37.749442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.537 [2024-11-20 19:03:37.749462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.537 [2024-11-20 19:03:37.749469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.537 [2024-11-20 19:03:37.757996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.537 [2024-11-20 19:03:37.758018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.537 [2024-11-20 19:03:37.758026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.537 [2024-11-20 19:03:37.767220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x104b740) 00:26:15.537 [2024-11-20 19:03:37.767239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:99 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.537 [2024-11-20 19:03:37.767247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.537 [2024-11-20 19:03:37.775910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.537 [2024-11-20 19:03:37.775930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.537 [2024-11-20 19:03:37.775938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.537 [2024-11-20 19:03:37.786148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.537 [2024-11-20 19:03:37.786170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.537 [2024-11-20 19:03:37.786179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.537 [2024-11-20 19:03:37.795704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.537 [2024-11-20 19:03:37.795726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.537 [2024-11-20 19:03:37.795733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.537 [2024-11-20 19:03:37.804667] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.537 [2024-11-20 19:03:37.804687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:25581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.537 [2024-11-20 19:03:37.804695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.537 25885.00 IOPS, 101.11 MiB/s [2024-11-20T18:03:37.862Z] [2024-11-20 19:03:37.813471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.537 [2024-11-20 19:03:37.813500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.537 [2024-11-20 19:03:37.813508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.537 [2024-11-20 19:03:37.825003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.537 [2024-11-20 19:03:37.825023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.537 [2024-11-20 19:03:37.825031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.537 [2024-11-20 19:03:37.834550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.537 [2024-11-20 19:03:37.834571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.537 [2024-11-20 19:03:37.834582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.537 [2024-11-20 19:03:37.845470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.537 [2024-11-20 19:03:37.845489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.537 [2024-11-20 19:03:37.845497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.537 [2024-11-20 19:03:37.855384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.537 [2024-11-20 19:03:37.855404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.537 [2024-11-20 19:03:37.855411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.796 [2024-11-20 19:03:37.863965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.796 [2024-11-20 19:03:37.863984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.797 [2024-11-20 19:03:37.863991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.797 [2024-11-20 19:03:37.876560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.797 [2024-11-20 19:03:37.876580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.797 [2024-11-20 19:03:37.876587] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.797 [2024-11-20 19:03:37.888601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.797 [2024-11-20 19:03:37.888621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.797 [2024-11-20 19:03:37.888628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.797 [2024-11-20 19:03:37.896869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.797 [2024-11-20 19:03:37.896889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.797 [2024-11-20 19:03:37.896896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.797 [2024-11-20 19:03:37.907976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.797 [2024-11-20 19:03:37.907996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.797 [2024-11-20 19:03:37.908004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.797 [2024-11-20 19:03:37.918366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.797 [2024-11-20 19:03:37.918386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1406 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:15.797 [2024-11-20 19:03:37.918393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.797 [2024-11-20 19:03:37.929052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.797 [2024-11-20 19:03:37.929075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.797 [2024-11-20 19:03:37.929083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.797 [2024-11-20 19:03:37.940628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.797 [2024-11-20 19:03:37.940647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.797 [2024-11-20 19:03:37.940655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.797 [2024-11-20 19:03:37.949742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.797 [2024-11-20 19:03:37.949761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.797 [2024-11-20 19:03:37.949768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.797 [2024-11-20 19:03:37.961530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.797 [2024-11-20 19:03:37.961550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:9510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.797 [2024-11-20 19:03:37.961558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.797 [2024-11-20 19:03:37.972525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.797 [2024-11-20 19:03:37.972545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.797 [2024-11-20 19:03:37.972552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.797 [2024-11-20 19:03:37.980693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.797 [2024-11-20 19:03:37.980712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.797 [2024-11-20 19:03:37.980720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.797 [2024-11-20 19:03:37.991596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.797 [2024-11-20 19:03:37.991615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.797 [2024-11-20 19:03:37.991623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.797 [2024-11-20 19:03:38.000937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.797 [2024-11-20 19:03:38.000956] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.797 [2024-11-20 19:03:38.000965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.797 [2024-11-20 19:03:38.009154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.797 [2024-11-20 19:03:38.009173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.797 [2024-11-20 19:03:38.009181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.797 [2024-11-20 19:03:38.019252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.797 [2024-11-20 19:03:38.019273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.797 [2024-11-20 19:03:38.019281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.797 [2024-11-20 19:03:38.027888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.797 [2024-11-20 19:03:38.027907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:25026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.797 [2024-11-20 19:03:38.027915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.797 [2024-11-20 19:03:38.037734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x104b740) 00:26:15.797 [2024-11-20 19:03:38.037754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.797 [2024-11-20 19:03:38.037762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.797 [2024-11-20 19:03:38.048671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.797 [2024-11-20 19:03:38.048691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.797 [2024-11-20 19:03:38.048700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.797 [2024-11-20 19:03:38.057024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.797 [2024-11-20 19:03:38.057044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.797 [2024-11-20 19:03:38.057052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.797 [2024-11-20 19:03:38.067098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.797 [2024-11-20 19:03:38.067118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.797 [2024-11-20 19:03:38.067127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.797 [2024-11-20 19:03:38.075531] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.797 [2024-11-20 19:03:38.075551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.797 [2024-11-20 19:03:38.075559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.797 [2024-11-20 19:03:38.084592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.797 [2024-11-20 19:03:38.084611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.797 [2024-11-20 19:03:38.084619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.798 [2024-11-20 19:03:38.094375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.798 [2024-11-20 19:03:38.094394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.798 [2024-11-20 19:03:38.094404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.798 [2024-11-20 19:03:38.103883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.798 [2024-11-20 19:03:38.103902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.798 [2024-11-20 19:03:38.103910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:15.798 [2024-11-20 19:03:38.115655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:15.798 [2024-11-20 19:03:38.115675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.798 [2024-11-20 19:03:38.115683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.057 [2024-11-20 19:03:38.126112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.057 [2024-11-20 19:03:38.126132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.057 [2024-11-20 19:03:38.126140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.057 [2024-11-20 19:03:38.134986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.057 [2024-11-20 19:03:38.135005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.057 [2024-11-20 19:03:38.135012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.057 [2024-11-20 19:03:38.146444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.057 [2024-11-20 19:03:38.146463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.057 [2024-11-20 19:03:38.146472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.057 [2024-11-20 19:03:38.158687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.057 [2024-11-20 19:03:38.158707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.057 [2024-11-20 19:03:38.158715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.057 [2024-11-20 19:03:38.169550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.057 [2024-11-20 19:03:38.169571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.057 [2024-11-20 19:03:38.169579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.057 [2024-11-20 19:03:38.179645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.057 [2024-11-20 19:03:38.179666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.058 [2024-11-20 19:03:38.179673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.058 [2024-11-20 19:03:38.188417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.058 [2024-11-20 19:03:38.188444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.058 [2024-11-20 
19:03:38.188452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.058 [2024-11-20 19:03:38.199287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.058 [2024-11-20 19:03:38.199309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.058 [2024-11-20 19:03:38.199318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.058 [2024-11-20 19:03:38.208843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.058 [2024-11-20 19:03:38.208866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.058 [2024-11-20 19:03:38.208875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.058 [2024-11-20 19:03:38.217039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.058 [2024-11-20 19:03:38.217061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.058 [2024-11-20 19:03:38.217069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.058 [2024-11-20 19:03:38.226431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.058 [2024-11-20 19:03:38.226452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12450 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.058 [2024-11-20 19:03:38.226459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.058 [2024-11-20 19:03:38.235968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.058 [2024-11-20 19:03:38.235988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.058 [2024-11-20 19:03:38.235996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.058 [2024-11-20 19:03:38.245379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.058 [2024-11-20 19:03:38.245399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.058 [2024-11-20 19:03:38.245406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.058 [2024-11-20 19:03:38.256425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.058 [2024-11-20 19:03:38.256444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.058 [2024-11-20 19:03:38.256452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.058 [2024-11-20 19:03:38.267605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.058 [2024-11-20 19:03:38.267624] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.058 [2024-11-20 19:03:38.267632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.058 [2024-11-20 19:03:38.276192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.058 [2024-11-20 19:03:38.276218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.058 [2024-11-20 19:03:38.276227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.058 [2024-11-20 19:03:38.286428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.058 [2024-11-20 19:03:38.286448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.058 [2024-11-20 19:03:38.286456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.058 [2024-11-20 19:03:38.296752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.058 [2024-11-20 19:03:38.296773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.058 [2024-11-20 19:03:38.296781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.058 [2024-11-20 19:03:38.305083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x104b740) 00:26:16.058 [2024-11-20 19:03:38.305104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.058 [2024-11-20 19:03:38.305112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.058 [2024-11-20 19:03:38.316254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.058 [2024-11-20 19:03:38.316274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.058 [2024-11-20 19:03:38.316281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.058 [2024-11-20 19:03:38.326896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.058 [2024-11-20 19:03:38.326915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.058 [2024-11-20 19:03:38.326922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.058 [2024-11-20 19:03:38.334620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.058 [2024-11-20 19:03:38.334639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.058 [2024-11-20 19:03:38.334647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.058 [2024-11-20 19:03:38.345015] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.058 [2024-11-20 19:03:38.345035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.058 [2024-11-20 19:03:38.345043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.058 [2024-11-20 19:03:38.354227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.058 [2024-11-20 19:03:38.354250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.058 [2024-11-20 19:03:38.354258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.058 [2024-11-20 19:03:38.364728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.058 [2024-11-20 19:03:38.364747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.058 [2024-11-20 19:03:38.364755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.058 [2024-11-20 19:03:38.372854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.058 [2024-11-20 19:03:38.372873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.058 [2024-11-20 19:03:38.372881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:16.318 [2024-11-20 19:03:38.383573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.318 [2024-11-20 19:03:38.383594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.318 [2024-11-20 19:03:38.383602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.318 [2024-11-20 19:03:38.394756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.318 [2024-11-20 19:03:38.394777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.318 [2024-11-20 19:03:38.394785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.318 [2024-11-20 19:03:38.404025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.318 [2024-11-20 19:03:38.404045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.318 [2024-11-20 19:03:38.404053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.318 [2024-11-20 19:03:38.413144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.318 [2024-11-20 19:03:38.413163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.318 [2024-11-20 19:03:38.413171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.318 [2024-11-20 19:03:38.421523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.318 [2024-11-20 19:03:38.421543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.318 [2024-11-20 19:03:38.421551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.318 [2024-11-20 19:03:38.431590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.318 [2024-11-20 19:03:38.431610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.318 [2024-11-20 19:03:38.431618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.318 [2024-11-20 19:03:38.441377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.318 [2024-11-20 19:03:38.441396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.318 [2024-11-20 19:03:38.441404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.318 [2024-11-20 19:03:38.454367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.318 [2024-11-20 19:03:38.454388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.318 [2024-11-20 
19:03:38.454396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.318 [2024-11-20 19:03:38.462227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.318 [2024-11-20 19:03:38.462248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.318 [2024-11-20 19:03:38.462255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.318 [2024-11-20 19:03:38.472534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.318 [2024-11-20 19:03:38.472555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.318 [2024-11-20 19:03:38.472562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.318 [2024-11-20 19:03:38.482767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.318 [2024-11-20 19:03:38.482786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.319 [2024-11-20 19:03:38.482794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.319 [2024-11-20 19:03:38.492563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.319 [2024-11-20 19:03:38.492583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21935 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.319 [2024-11-20 19:03:38.492591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.319 [2024-11-20 19:03:38.501726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.319 [2024-11-20 19:03:38.501747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.319 [2024-11-20 19:03:38.501755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.319 [2024-11-20 19:03:38.511317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.319 [2024-11-20 19:03:38.511337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.319 [2024-11-20 19:03:38.511345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.319 [2024-11-20 19:03:38.521265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.319 [2024-11-20 19:03:38.521285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.319 [2024-11-20 19:03:38.521298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.319 [2024-11-20 19:03:38.531088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.319 [2024-11-20 19:03:38.531109] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.319 [2024-11-20 19:03:38.531116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.319 [2024-11-20 19:03:38.540061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.319 [2024-11-20 19:03:38.540081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.319 [2024-11-20 19:03:38.540089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.319 [2024-11-20 19:03:38.549377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.319 [2024-11-20 19:03:38.549396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.319 [2024-11-20 19:03:38.549403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.319 [2024-11-20 19:03:38.558409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.319 [2024-11-20 19:03:38.558429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.319 [2024-11-20 19:03:38.558437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.319 [2024-11-20 19:03:38.570482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 
00:26:16.319 [2024-11-20 19:03:38.570503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.319 [2024-11-20 19:03:38.570511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.319 [2024-11-20 19:03:38.581104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.319 [2024-11-20 19:03:38.581124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.319 [2024-11-20 19:03:38.581132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.319 [2024-11-20 19:03:38.591949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.319 [2024-11-20 19:03:38.591969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.319 [2024-11-20 19:03:38.591977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.319 [2024-11-20 19:03:38.600374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.319 [2024-11-20 19:03:38.600394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.319 [2024-11-20 19:03:38.600401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.319 [2024-11-20 19:03:38.609390] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.319 [2024-11-20 19:03:38.609413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.319 [2024-11-20 19:03:38.609421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.319 [2024-11-20 19:03:38.618765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.319 [2024-11-20 19:03:38.618785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.319 [2024-11-20 19:03:38.618793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.319 [2024-11-20 19:03:38.628092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.319 [2024-11-20 19:03:38.628111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.319 [2024-11-20 19:03:38.628119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.319 [2024-11-20 19:03:38.637058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.319 [2024-11-20 19:03:38.637077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:25231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.319 [2024-11-20 19:03:38.637084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:16.579 [2024-11-20 19:03:38.647176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.579 [2024-11-20 19:03:38.647197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.579 [2024-11-20 19:03:38.647210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.579 [2024-11-20 19:03:38.657216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.579 [2024-11-20 19:03:38.657236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.579 [2024-11-20 19:03:38.657244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.579 [2024-11-20 19:03:38.666134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.579 [2024-11-20 19:03:38.666153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.579 [2024-11-20 19:03:38.666161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.579 [2024-11-20 19:03:38.675754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.579 [2024-11-20 19:03:38.675773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.579 [2024-11-20 19:03:38.675780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.579 [2024-11-20 19:03:38.684977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.579 [2024-11-20 19:03:38.684997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.579 [2024-11-20 19:03:38.685004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.579 [2024-11-20 19:03:38.694286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.579 [2024-11-20 19:03:38.694306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.579 [2024-11-20 19:03:38.694313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.579 [2024-11-20 19:03:38.703438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.579 [2024-11-20 19:03:38.703457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.579 [2024-11-20 19:03:38.703465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.579 [2024-11-20 19:03:38.714833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.579 [2024-11-20 19:03:38.714853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.579 [2024-11-20 
19:03:38.714861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.579 [2024-11-20 19:03:38.727401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.579 [2024-11-20 19:03:38.727420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.579 [2024-11-20 19:03:38.727428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.579 [2024-11-20 19:03:38.735357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.579 [2024-11-20 19:03:38.735378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.579 [2024-11-20 19:03:38.735385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.579 [2024-11-20 19:03:38.746773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.579 [2024-11-20 19:03:38.746793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.579 [2024-11-20 19:03:38.746801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.579 [2024-11-20 19:03:38.759269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.579 [2024-11-20 19:03:38.759288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15107 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.579 [2024-11-20 19:03:38.759296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.579 [2024-11-20 19:03:38.771836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.579 [2024-11-20 19:03:38.771856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.579 [2024-11-20 19:03:38.771864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.579 [2024-11-20 19:03:38.783290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.579 [2024-11-20 19:03:38.783309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.579 [2024-11-20 19:03:38.783320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.579 [2024-11-20 19:03:38.791879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.579 [2024-11-20 19:03:38.791898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.579 [2024-11-20 19:03:38.791906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.579 [2024-11-20 19:03:38.804126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.579 [2024-11-20 19:03:38.804146] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.579 [2024-11-20 19:03:38.804154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.579 25747.00 IOPS, 100.57 MiB/s [2024-11-20T18:03:38.904Z] [2024-11-20 19:03:38.814845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104b740) 00:26:16.579 [2024-11-20 19:03:38.814865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.579 [2024-11-20 19:03:38.814873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.579 00:26:16.579 Latency(us) 00:26:16.579 [2024-11-20T18:03:38.904Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:16.579 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:16.580 nvme0n1 : 2.01 25744.17 100.56 0.00 0.00 4966.96 2496.61 17226.61 00:26:16.580 [2024-11-20T18:03:38.905Z] =================================================================================================================== 00:26:16.580 [2024-11-20T18:03:38.905Z] Total : 25744.17 100.56 0.00 0.00 4966.96 2496.61 17226.61 00:26:16.580 { 00:26:16.580 "results": [ 00:26:16.580 { 00:26:16.580 "job": "nvme0n1", 00:26:16.580 "core_mask": "0x2", 00:26:16.580 "workload": "randread", 00:26:16.580 "status": "finished", 00:26:16.580 "queue_depth": 128, 00:26:16.580 "io_size": 4096, 00:26:16.580 "runtime": 2.005192, 00:26:16.580 "iops": 25744.16813950983, 00:26:16.580 "mibps": 100.56315679496028, 00:26:16.580 "io_failed": 0, 00:26:16.580 "io_timeout": 0, 00:26:16.580 "avg_latency_us": 4966.9568767838, 00:26:16.580 "min_latency_us": 2496.609523809524, 00:26:16.580 
"max_latency_us": 17226.605714285713 00:26:16.580 } 00:26:16.580 ], 00:26:16.580 "core_count": 1 00:26:16.580 } 00:26:16.580 19:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:16.580 19:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:16.580 19:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:16.580 | .driver_specific 00:26:16.580 | .nvme_error 00:26:16.580 | .status_code 00:26:16.580 | .command_transient_transport_error' 00:26:16.580 19:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:16.839 19:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 202 > 0 )) 00:26:16.839 19:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3791674 00:26:16.839 19:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3791674 ']' 00:26:16.839 19:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3791674 00:26:16.839 19:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:16.839 19:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:16.839 19:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3791674 00:26:16.839 19:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:16.839 19:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:16.839 19:03:39 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3791674' 00:26:16.839 killing process with pid 3791674 00:26:16.839 19:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3791674 00:26:16.839 Received shutdown signal, test time was about 2.000000 seconds 00:26:16.839 00:26:16.839 Latency(us) 00:26:16.839 [2024-11-20T18:03:39.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:16.839 [2024-11-20T18:03:39.164Z] =================================================================================================================== 00:26:16.839 [2024-11-20T18:03:39.164Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:16.839 19:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3791674 00:26:17.098 19:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:17.098 19:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:17.098 19:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:17.098 19:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:17.098 19:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:17.098 19:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3792148 00:26:17.098 19:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3792148 /var/tmp/bperf.sock 00:26:17.098 19:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:17.098 19:03:39 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3792148 ']' 00:26:17.098 19:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:17.098 19:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:17.098 19:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:17.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:17.098 19:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:17.098 19:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:17.098 [2024-11-20 19:03:39.291556] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:26:17.098 [2024-11-20 19:03:39.291606] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3792148 ] 00:26:17.098 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:17.098 Zero copy mechanism will not be used. 
00:26:17.098 [2024-11-20 19:03:39.366023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.098 [2024-11-20 19:03:39.402782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.356 19:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:17.356 19:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:17.356 19:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:17.356 19:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:17.615 19:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:17.615 19:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.615 19:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:17.615 19:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.615 19:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:17.615 19:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:17.874 nvme0n1 00:26:17.874 19:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:17.874 19:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.874 19:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:17.874 19:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.874 19:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:17.874 19:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:17.874 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:17.874 Zero copy mechanism will not be used. 00:26:17.874 Running I/O for 2 seconds... 00:26:17.874 [2024-11-20 19:03:40.160365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:17.874 [2024-11-20 19:03:40.160401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.874 [2024-11-20 19:03:40.160411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:17.874 [2024-11-20 19:03:40.165583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:17.874 [2024-11-20 19:03:40.165608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.874 [2024-11-20 19:03:40.165616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:17.874 
[2024-11-20 19:03:40.170825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:17.874 [2024-11-20 19:03:40.170847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.874 [2024-11-20 19:03:40.170856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:17.874 [2024-11-20 19:03:40.176081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:17.874 [2024-11-20 19:03:40.176103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.874 [2024-11-20 19:03:40.176111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.874 [2024-11-20 19:03:40.181346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:17.874 [2024-11-20 19:03:40.181368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.874 [2024-11-20 19:03:40.181376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:17.874 [2024-11-20 19:03:40.186617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:17.874 [2024-11-20 19:03:40.186638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.874 [2024-11-20 19:03:40.186646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:17.874 [2024-11-20 19:03:40.191982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:17.874 [2024-11-20 19:03:40.192004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.874 [2024-11-20 19:03:40.192011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:17.874 [2024-11-20 19:03:40.197456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:17.874 [2024-11-20 19:03:40.197478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.874 [2024-11-20 19:03:40.197487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.134 [2024-11-20 19:03:40.202926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.134 [2024-11-20 19:03:40.202948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.134 [2024-11-20 19:03:40.202956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.134 [2024-11-20 19:03:40.208351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.134 [2024-11-20 19:03:40.208373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.134 [2024-11-20 19:03:40.208381] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.134 [2024-11-20 19:03:40.213572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.134 [2024-11-20 19:03:40.213593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.134 [2024-11-20 19:03:40.213601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.134 [2024-11-20 19:03:40.218885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.134 [2024-11-20 19:03:40.218905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.134 [2024-11-20 19:03:40.218914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.134 [2024-11-20 19:03:40.224176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.134 [2024-11-20 19:03:40.224197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.134 [2024-11-20 19:03:40.224215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.134 [2024-11-20 19:03:40.229481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.134 [2024-11-20 19:03:40.229502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.134 [2024-11-20 
19:03:40.229511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.135 [2024-11-20 19:03:40.234879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.135 [2024-11-20 19:03:40.234900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.135 [2024-11-20 19:03:40.234908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.135 [2024-11-20 19:03:40.240430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.135 [2024-11-20 19:03:40.240452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.135 [2024-11-20 19:03:40.240460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.135 [2024-11-20 19:03:40.245800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.135 [2024-11-20 19:03:40.245821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.135 [2024-11-20 19:03:40.245829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.135 [2024-11-20 19:03:40.251151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.135 [2024-11-20 19:03:40.251171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.135 [2024-11-20 19:03:40.251179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.135 [2024-11-20 19:03:40.256488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.135 [2024-11-20 19:03:40.256510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.135 [2024-11-20 19:03:40.256518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.135 [2024-11-20 19:03:40.261812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.135 [2024-11-20 19:03:40.261833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.135 [2024-11-20 19:03:40.261841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.135 [2024-11-20 19:03:40.267127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.135 [2024-11-20 19:03:40.267148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.135 [2024-11-20 19:03:40.267156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.135 [2024-11-20 19:03:40.272469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.135 [2024-11-20 19:03:40.272494] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.135 [2024-11-20 19:03:40.272501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.135 [2024-11-20 19:03:40.277788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.135 [2024-11-20 19:03:40.277809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.135 [2024-11-20 19:03:40.277817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.135 [2024-11-20 19:03:40.283105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.135 [2024-11-20 19:03:40.283127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.135 [2024-11-20 19:03:40.283135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.135 [2024-11-20 19:03:40.288388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.135 [2024-11-20 19:03:40.288408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.135 [2024-11-20 19:03:40.288416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.135 [2024-11-20 19:03:40.293680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.135 [2024-11-20 
19:03:40.293700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.135 [2024-11-20 19:03:40.293708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.135 [2024-11-20 19:03:40.298969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.135 [2024-11-20 19:03:40.298990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.135 [2024-11-20 19:03:40.298998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.135 [2024-11-20 19:03:40.304265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.135 [2024-11-20 19:03:40.304286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.135 [2024-11-20 19:03:40.304293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.135 [2024-11-20 19:03:40.309536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.135 [2024-11-20 19:03:40.309557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.135 [2024-11-20 19:03:40.309565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.135 [2024-11-20 19:03:40.314776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xf58560) 00:26:18.135 [2024-11-20 19:03:40.314796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.135 [2024-11-20 19:03:40.314804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.135 [2024-11-20 19:03:40.320056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.135 [2024-11-20 19:03:40.320076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.135 [2024-11-20 19:03:40.320084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.135 [2024-11-20 19:03:40.325476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.135 [2024-11-20 19:03:40.325496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.135 [2024-11-20 19:03:40.325504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.135 [2024-11-20 19:03:40.330803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.135 [2024-11-20 19:03:40.330823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.135 [2024-11-20 19:03:40.330831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.135 [2024-11-20 19:03:40.336074] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.135 [2024-11-20 19:03:40.336095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.135 [2024-11-20 19:03:40.336103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.135 [2024-11-20 19:03:40.341276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.135 [2024-11-20 19:03:40.341297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.135 [2024-11-20 19:03:40.341304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.135 [2024-11-20 19:03:40.346552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.135 [2024-11-20 19:03:40.346573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.135 [2024-11-20 19:03:40.346580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.135 [2024-11-20 19:03:40.351864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.135 [2024-11-20 19:03:40.351884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.135 [2024-11-20 19:03:40.351892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:26:18.135 [2024-11-20 19:03:40.357134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.135 [2024-11-20 19:03:40.357154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.135 [2024-11-20 19:03:40.357161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.135 [2024-11-20 19:03:40.362371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.135 [2024-11-20 19:03:40.362392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.135 [2024-11-20 19:03:40.362404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.135 [2024-11-20 19:03:40.368477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.135 [2024-11-20 19:03:40.368499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.135 [2024-11-20 19:03:40.368507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.135 [2024-11-20 19:03:40.375850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.135 [2024-11-20 19:03:40.375872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.135 [2024-11-20 19:03:40.375880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.135 [2024-11-20 19:03:40.383325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.136 [2024-11-20 19:03:40.383347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.136 [2024-11-20 19:03:40.383355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.136 [2024-11-20 19:03:40.390639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.136 [2024-11-20 19:03:40.390662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.136 [2024-11-20 19:03:40.390671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.136 [2024-11-20 19:03:40.398255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.136 [2024-11-20 19:03:40.398277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.136 [2024-11-20 19:03:40.398285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.136 [2024-11-20 19:03:40.405733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.136 [2024-11-20 19:03:40.405754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.136 [2024-11-20 19:03:40.405762] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.136 [2024-11-20 19:03:40.413118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.136 [2024-11-20 19:03:40.413139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.136 [2024-11-20 19:03:40.413147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.136 [2024-11-20 19:03:40.420940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.136 [2024-11-20 19:03:40.420963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.136 [2024-11-20 19:03:40.420971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.136 [2024-11-20 19:03:40.428773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.136 [2024-11-20 19:03:40.428803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.136 [2024-11-20 19:03:40.428811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.136 [2024-11-20 19:03:40.436235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.136 [2024-11-20 19:03:40.436257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:18.136 [2024-11-20 19:03:40.436265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.136 [2024-11-20 19:03:40.444312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.136 [2024-11-20 19:03:40.444334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.136 [2024-11-20 19:03:40.444342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.136 [2024-11-20 19:03:40.452389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.136 [2024-11-20 19:03:40.452411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.136 [2024-11-20 19:03:40.452419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.396 [2024-11-20 19:03:40.460287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.396 [2024-11-20 19:03:40.460310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.396 [2024-11-20 19:03:40.460319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.396 [2024-11-20 19:03:40.467940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.396 [2024-11-20 19:03:40.467961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.396 [2024-11-20 19:03:40.467970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.396 [2024-11-20 19:03:40.475403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.396 [2024-11-20 19:03:40.475425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.396 [2024-11-20 19:03:40.475433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.396 [2024-11-20 19:03:40.482182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.396 [2024-11-20 19:03:40.482209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.396 [2024-11-20 19:03:40.482218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.396 [2024-11-20 19:03:40.487632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.396 [2024-11-20 19:03:40.487653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.396 [2024-11-20 19:03:40.487665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.396 [2024-11-20 19:03:40.492916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.396 [2024-11-20 19:03:40.492938] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.396 [2024-11-20 19:03:40.492947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.396 [2024-11-20 19:03:40.498274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.396 [2024-11-20 19:03:40.498296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.396 [2024-11-20 19:03:40.498304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.396 [2024-11-20 19:03:40.503649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.396 [2024-11-20 19:03:40.503670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.396 [2024-11-20 19:03:40.503678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.396 [2024-11-20 19:03:40.508935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.396 [2024-11-20 19:03:40.508956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.396 [2024-11-20 19:03:40.508964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.396 [2024-11-20 19:03:40.514293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 
00:26:18.396 [2024-11-20 19:03:40.514314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.396 [2024-11-20 19:03:40.514322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.396 [2024-11-20 19:03:40.519622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.396 [2024-11-20 19:03:40.519643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.396 [2024-11-20 19:03:40.519651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.396 [2024-11-20 19:03:40.525014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.396 [2024-11-20 19:03:40.525035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.397 [2024-11-20 19:03:40.525043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.397 [2024-11-20 19:03:40.530346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.397 [2024-11-20 19:03:40.530366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.397 [2024-11-20 19:03:40.530374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.397 [2024-11-20 19:03:40.535731] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.397 [2024-11-20 19:03:40.535755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.397 [2024-11-20 19:03:40.535763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.397 [2024-11-20 19:03:40.541058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.397 [2024-11-20 19:03:40.541078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.397 [2024-11-20 19:03:40.541086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.397 [2024-11-20 19:03:40.546350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.397 [2024-11-20 19:03:40.546371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.397 [2024-11-20 19:03:40.546379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.397 [2024-11-20 19:03:40.551726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.397 [2024-11-20 19:03:40.551746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.397 [2024-11-20 19:03:40.551754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 
m:0 dnr:0 00:26:18.397 [2024-11-20 19:03:40.555359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.397 [2024-11-20 19:03:40.555379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.397 [2024-11-20 19:03:40.555386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.397 [2024-11-20 19:03:40.560303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.397 [2024-11-20 19:03:40.560324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.397 [2024-11-20 19:03:40.560332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.397 [2024-11-20 19:03:40.566243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.397 [2024-11-20 19:03:40.566264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.397 [2024-11-20 19:03:40.566272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.397 [2024-11-20 19:03:40.571994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.397 [2024-11-20 19:03:40.572015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.397 [2024-11-20 19:03:40.572023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.397 [2024-11-20 19:03:40.577409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.397 [2024-11-20 19:03:40.577431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.397 [2024-11-20 19:03:40.577439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.397 [2024-11-20 19:03:40.582766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.397 [2024-11-20 19:03:40.582786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.397 [2024-11-20 19:03:40.582794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.397 [2024-11-20 19:03:40.588076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.397 [2024-11-20 19:03:40.588096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.397 [2024-11-20 19:03:40.588104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.397 [2024-11-20 19:03:40.593406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.397 [2024-11-20 19:03:40.593427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.397 [2024-11-20 19:03:40.593435] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.397 [2024-11-20 19:03:40.598831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.397 [2024-11-20 19:03:40.598852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.397 [2024-11-20 19:03:40.598860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.397 [2024-11-20 19:03:40.604429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.397 [2024-11-20 19:03:40.604450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.397 [2024-11-20 19:03:40.604459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.397 [2024-11-20 19:03:40.609827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.397 [2024-11-20 19:03:40.609848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.397 [2024-11-20 19:03:40.609856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.397 [2024-11-20 19:03:40.615351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.397 [2024-11-20 19:03:40.615372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:18.397 [2024-11-20 19:03:40.615379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.397 [2024-11-20 19:03:40.620712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.397 [2024-11-20 19:03:40.620732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.397 [2024-11-20 19:03:40.620741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.397 [2024-11-20 19:03:40.626034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.397 [2024-11-20 19:03:40.626055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.397 [2024-11-20 19:03:40.626066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.397 [2024-11-20 19:03:40.631371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.397 [2024-11-20 19:03:40.631391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.397 [2024-11-20 19:03:40.631399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.397 [2024-11-20 19:03:40.636717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.397 [2024-11-20 19:03:40.636737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.397 [2024-11-20 19:03:40.636745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.397 [2024-11-20 19:03:40.642102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.397 [2024-11-20 19:03:40.642124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.397 [2024-11-20 19:03:40.642131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.397 [2024-11-20 19:03:40.647133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.397 [2024-11-20 19:03:40.647154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.397 [2024-11-20 19:03:40.647161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.397 [2024-11-20 19:03:40.652423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.397 [2024-11-20 19:03:40.652443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.397 [2024-11-20 19:03:40.652452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.397 [2024-11-20 19:03:40.657646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.397 [2024-11-20 19:03:40.657666] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.397 [2024-11-20 19:03:40.657675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.397 [2024-11-20 19:03:40.662830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.397 [2024-11-20 19:03:40.662850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.397 [2024-11-20 19:03:40.662858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.397 [2024-11-20 19:03:40.668063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.397 [2024-11-20 19:03:40.668084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.398 [2024-11-20 19:03:40.668091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.398 [2024-11-20 19:03:40.673514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.398 [2024-11-20 19:03:40.673539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.398 [2024-11-20 19:03:40.673548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.398 [2024-11-20 19:03:40.679104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 
00:26:18.398 [2024-11-20 19:03:40.679125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.398 [2024-11-20 19:03:40.679134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.398 [2024-11-20 19:03:40.684512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.398 [2024-11-20 19:03:40.684533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.398 [2024-11-20 19:03:40.684540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.398 [2024-11-20 19:03:40.689962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.398 [2024-11-20 19:03:40.689982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.398 [2024-11-20 19:03:40.689990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.398 [2024-11-20 19:03:40.695456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.398 [2024-11-20 19:03:40.695477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.398 [2024-11-20 19:03:40.695484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.398 [2024-11-20 19:03:40.700873] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.398 [2024-11-20 19:03:40.700893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.398 [2024-11-20 19:03:40.700900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.398 [2024-11-20 19:03:40.706231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.398 [2024-11-20 19:03:40.706252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.398 [2024-11-20 19:03:40.706259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.398 [2024-11-20 19:03:40.711644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.398 [2024-11-20 19:03:40.711665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.398 [2024-11-20 19:03:40.711672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.398 [2024-11-20 19:03:40.717072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.398 [2024-11-20 19:03:40.717093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.398 [2024-11-20 19:03:40.717101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:26:18.659 [2024-11-20 19:03:40.722629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.659 [2024-11-20 19:03:40.722650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.659 [2024-11-20 19:03:40.722657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.659 [2024-11-20 19:03:40.728251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.659 [2024-11-20 19:03:40.728271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.659 [2024-11-20 19:03:40.728279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.659 [2024-11-20 19:03:40.733755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.659 [2024-11-20 19:03:40.733775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.659 [2024-11-20 19:03:40.733783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.659 [2024-11-20 19:03:40.739232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.659 [2024-11-20 19:03:40.739252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.659 [2024-11-20 19:03:40.739260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.659 [2024-11-20 19:03:40.744675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.659 [2024-11-20 19:03:40.744696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.659 [2024-11-20 19:03:40.744704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.659 [2024-11-20 19:03:40.750149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.659 [2024-11-20 19:03:40.750170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.659 [2024-11-20 19:03:40.750178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.659 [2024-11-20 19:03:40.755552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.659 [2024-11-20 19:03:40.755572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.659 [2024-11-20 19:03:40.755580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.659 [2024-11-20 19:03:40.760789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.659 [2024-11-20 19:03:40.760810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.659 [2024-11-20 19:03:40.760817] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.659 [2024-11-20 19:03:40.766216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.659 [2024-11-20 19:03:40.766236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.659 [2024-11-20 19:03:40.766247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.659 [2024-11-20 19:03:40.771600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.659 [2024-11-20 19:03:40.771621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.659 [2024-11-20 19:03:40.771629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.659 [2024-11-20 19:03:40.777110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.659 [2024-11-20 19:03:40.777130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.659 [2024-11-20 19:03:40.777138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.659 [2024-11-20 19:03:40.782700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.659 [2024-11-20 19:03:40.782721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:18.659 [2024-11-20 19:03:40.782729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.659 [2024-11-20 19:03:40.788115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.659 [2024-11-20 19:03:40.788135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.659 [2024-11-20 19:03:40.788143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.659 [2024-11-20 19:03:40.793772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.659 [2024-11-20 19:03:40.793793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.659 [2024-11-20 19:03:40.793801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.659 [2024-11-20 19:03:40.800089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.659 [2024-11-20 19:03:40.800109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.659 [2024-11-20 19:03:40.800118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.659 [2024-11-20 19:03:40.805553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.659 [2024-11-20 19:03:40.805574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:3 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.659 [2024-11-20 19:03:40.805582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.659 [2024-11-20 19:03:40.810842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.659 [2024-11-20 19:03:40.810863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.659 [2024-11-20 19:03:40.810870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.659 [2024-11-20 19:03:40.816354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.659 [2024-11-20 19:03:40.816378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.659 [2024-11-20 19:03:40.816386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.659 [2024-11-20 19:03:40.821392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.659 [2024-11-20 19:03:40.821413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.659 [2024-11-20 19:03:40.821422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.659 [2024-11-20 19:03:40.826786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.659 [2024-11-20 19:03:40.826807] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.659 [2024-11-20 19:03:40.826816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.659 [2024-11-20 19:03:40.832005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.659 [2024-11-20 19:03:40.832026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.659 [2024-11-20 19:03:40.832036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.659 [2024-11-20 19:03:40.837250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.659 [2024-11-20 19:03:40.837271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.659 [2024-11-20 19:03:40.837280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.659 [2024-11-20 19:03:40.842683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.659 [2024-11-20 19:03:40.842704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.659 [2024-11-20 19:03:40.842714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.659 [2024-11-20 19:03:40.848044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 
00:26:18.659 [2024-11-20 19:03:40.848065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.659 [2024-11-20 19:03:40.848074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.659 [2024-11-20 19:03:40.853374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.659 [2024-11-20 19:03:40.853396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.659 [2024-11-20 19:03:40.853405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.659 [2024-11-20 19:03:40.858673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.659 [2024-11-20 19:03:40.858696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.660 [2024-11-20 19:03:40.858705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.660 [2024-11-20 19:03:40.864020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.660 [2024-11-20 19:03:40.864041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.660 [2024-11-20 19:03:40.864050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.660 [2024-11-20 19:03:40.869381] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.660 [2024-11-20 19:03:40.869403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.660 [2024-11-20 19:03:40.869413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.660 [2024-11-20 19:03:40.874581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.660 [2024-11-20 19:03:40.874605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.660 [2024-11-20 19:03:40.874614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.660 [2024-11-20 19:03:40.879830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.660 [2024-11-20 19:03:40.879853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.660 [2024-11-20 19:03:40.879863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.660 [2024-11-20 19:03:40.885070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.660 [2024-11-20 19:03:40.885092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.660 [2024-11-20 19:03:40.885102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:26:18.660 [2024-11-20 19:03:40.890512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.660 [2024-11-20 19:03:40.890535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.660 [2024-11-20 19:03:40.890545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.660 [2024-11-20 19:03:40.896322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.660 [2024-11-20 19:03:40.896343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.660 [2024-11-20 19:03:40.896353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.660 [2024-11-20 19:03:40.901736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.660 [2024-11-20 19:03:40.901757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.660 [2024-11-20 19:03:40.901766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.660 [2024-11-20 19:03:40.907104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.660 [2024-11-20 19:03:40.907130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.660 [2024-11-20 19:03:40.907143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.660 [2024-11-20 19:03:40.912509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.660 [2024-11-20 19:03:40.912531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.660 [2024-11-20 19:03:40.912540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.660 [2024-11-20 19:03:40.918069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.660 [2024-11-20 19:03:40.918091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.660 [2024-11-20 19:03:40.918101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.660 [2024-11-20 19:03:40.923408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.660 [2024-11-20 19:03:40.923430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.660 [2024-11-20 19:03:40.923440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.660 [2024-11-20 19:03:40.928820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.660 [2024-11-20 19:03:40.928841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.660 [2024-11-20 19:03:40.928852] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.660 [2024-11-20 19:03:40.931799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.660 [2024-11-20 19:03:40.931820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.660 [2024-11-20 19:03:40.931830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.660 [2024-11-20 19:03:40.937378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.660 [2024-11-20 19:03:40.937398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.660 [2024-11-20 19:03:40.937407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.660 [2024-11-20 19:03:40.942810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.660 [2024-11-20 19:03:40.942830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.660 [2024-11-20 19:03:40.942840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.660 [2024-11-20 19:03:40.948191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.660 [2024-11-20 19:03:40.948217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:18.660 [2024-11-20 19:03:40.948228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.660 [2024-11-20 19:03:40.955164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.660 [2024-11-20 19:03:40.955187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.660 [2024-11-20 19:03:40.955198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.660 [2024-11-20 19:03:40.961812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.660 [2024-11-20 19:03:40.961833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.660 [2024-11-20 19:03:40.961844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.660 [2024-11-20 19:03:40.969768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.660 [2024-11-20 19:03:40.969791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.660 [2024-11-20 19:03:40.969801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.660 [2024-11-20 19:03:40.977127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.660 [2024-11-20 19:03:40.977148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.660 [2024-11-20 19:03:40.977159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.921 [2024-11-20 19:03:40.983780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.921 [2024-11-20 19:03:40.983803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.921 [2024-11-20 19:03:40.983813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.921 [2024-11-20 19:03:40.990828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.921 [2024-11-20 19:03:40.990851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.921 [2024-11-20 19:03:40.990861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.921 [2024-11-20 19:03:40.997483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.921 [2024-11-20 19:03:40.997504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.921 [2024-11-20 19:03:40.997514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.921 [2024-11-20 19:03:41.004784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.921 [2024-11-20 19:03:41.004807] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.921 [2024-11-20 19:03:41.004818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.921 [2024-11-20 19:03:41.012812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.921 [2024-11-20 19:03:41.012835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.921 [2024-11-20 19:03:41.012849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.921 [2024-11-20 19:03:41.021893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.921 [2024-11-20 19:03:41.021917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.921 [2024-11-20 19:03:41.021928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.921 [2024-11-20 19:03:41.030696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.921 [2024-11-20 19:03:41.030719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.921 [2024-11-20 19:03:41.030729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.921 [2024-11-20 19:03:41.038464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 
00:26:18.921 [2024-11-20 19:03:41.038487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.921 [2024-11-20 19:03:41.038497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.921 [2024-11-20 19:03:41.046052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.921 [2024-11-20 19:03:41.046074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.921 [2024-11-20 19:03:41.046085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.921 [2024-11-20 19:03:41.052028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.921 [2024-11-20 19:03:41.052050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.921 [2024-11-20 19:03:41.052061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.921 [2024-11-20 19:03:41.057448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.921 [2024-11-20 19:03:41.057468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.921 [2024-11-20 19:03:41.057478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.921 [2024-11-20 19:03:41.062735] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.921 [2024-11-20 19:03:41.062756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.921 [2024-11-20 19:03:41.062768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.921 [2024-11-20 19:03:41.068019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.921 [2024-11-20 19:03:41.068042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.921 [2024-11-20 19:03:41.068052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.921 [2024-11-20 19:03:41.073261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.921 [2024-11-20 19:03:41.073287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.921 [2024-11-20 19:03:41.073298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.921 [2024-11-20 19:03:41.078644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:18.921 [2024-11-20 19:03:41.078665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.921 [2024-11-20 19:03:41.078675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0
00:26:18.921 [2024-11-20 19:03:41.083897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:18.921 [2024-11-20 19:03:41.083920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.921 [2024-11-20 19:03:41.083930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:18.921 [2024-11-20 19:03:41.089321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:18.921 [2024-11-20 19:03:41.089343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.921 [2024-11-20 19:03:41.089353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:18.921 [2024-11-20 19:03:41.094665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:18.921 [2024-11-20 19:03:41.094689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.921 [2024-11-20 19:03:41.094699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:18.921 [2024-11-20 19:03:41.100212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:18.921 [2024-11-20 19:03:41.100235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.921 [2024-11-20 19:03:41.100246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:18.921 [2024-11-20 19:03:41.105735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:18.921 [2024-11-20 19:03:41.105761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.921 [2024-11-20 19:03:41.105773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:18.921 [2024-11-20 19:03:41.111360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:18.921 [2024-11-20 19:03:41.111382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.921 [2024-11-20 19:03:41.111392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:18.921 [2024-11-20 19:03:41.116844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:18.921 [2024-11-20 19:03:41.116866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.921 [2024-11-20 19:03:41.116877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:18.921 [2024-11-20 19:03:41.122361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:18.921 [2024-11-20 19:03:41.122384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.921 [2024-11-20 19:03:41.122396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:18.921 [2024-11-20 19:03:41.127915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:18.921 [2024-11-20 19:03:41.127936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.921 [2024-11-20 19:03:41.127946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:18.921 [2024-11-20 19:03:41.133380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:18.921 [2024-11-20 19:03:41.133400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.921 [2024-11-20 19:03:41.133409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:18.921 [2024-11-20 19:03:41.138455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:18.922 [2024-11-20 19:03:41.138476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.922 [2024-11-20 19:03:41.138485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:18.922 [2024-11-20 19:03:41.143906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:18.922 [2024-11-20 19:03:41.143927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.922 [2024-11-20 19:03:41.143936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:18.922 [2024-11-20 19:03:41.149385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:18.922 [2024-11-20 19:03:41.149408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.922 [2024-11-20 19:03:41.149419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:18.922 5384.00 IOPS, 673.00 MiB/s [2024-11-20T18:03:41.247Z] [2024-11-20 19:03:41.156386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:18.922 [2024-11-20 19:03:41.156410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.922 [2024-11-20 19:03:41.156422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:18.922 [2024-11-20 19:03:41.161833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:18.922 [2024-11-20 19:03:41.161855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.922 [2024-11-20 19:03:41.161863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:18.922 [2024-11-20 19:03:41.167128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:18.922 [2024-11-20 19:03:41.167150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.922 [2024-11-20 19:03:41.167166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:18.922 [2024-11-20 19:03:41.172499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:18.922 [2024-11-20 19:03:41.172522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.922 [2024-11-20 19:03:41.172530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:18.922 [2024-11-20 19:03:41.177962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:18.922 [2024-11-20 19:03:41.177985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.922 [2024-11-20 19:03:41.177993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:18.922 [2024-11-20 19:03:41.183148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:18.922 [2024-11-20 19:03:41.183170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.922 [2024-11-20 19:03:41.183178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:18.922 [2024-11-20 19:03:41.188359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:18.922 [2024-11-20 19:03:41.188381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.922 [2024-11-20 19:03:41.188389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:18.922 [2024-11-20 19:03:41.193446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:18.922 [2024-11-20 19:03:41.193467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.922 [2024-11-20 19:03:41.193475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:18.922 [2024-11-20 19:03:41.198574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:18.922 [2024-11-20 19:03:41.198596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.922 [2024-11-20 19:03:41.198604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:18.922 [2024-11-20 19:03:41.203793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:18.922 [2024-11-20 19:03:41.203815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.922 [2024-11-20 19:03:41.203824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:18.922 [2024-11-20 19:03:41.208982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:18.922 [2024-11-20 19:03:41.209004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.922 [2024-11-20 19:03:41.209012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:18.922 [2024-11-20 19:03:41.214184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:18.922 [2024-11-20 19:03:41.214217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.922 [2024-11-20 19:03:41.214227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:18.922 [2024-11-20 19:03:41.219385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:18.922 [2024-11-20 19:03:41.219407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.922 [2024-11-20 19:03:41.219415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:18.922 [2024-11-20 19:03:41.224599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:18.922 [2024-11-20 19:03:41.224622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.922 [2024-11-20 19:03:41.224630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:18.922 [2024-11-20 19:03:41.229774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:18.922 [2024-11-20 19:03:41.229796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.922 [2024-11-20 19:03:41.229804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:18.922 [2024-11-20 19:03:41.235113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:18.922 [2024-11-20 19:03:41.235135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.922 [2024-11-20 19:03:41.235143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:18.922 [2024-11-20 19:03:41.240326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:18.922 [2024-11-20 19:03:41.240348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.922 [2024-11-20 19:03:41.240356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:19.182 [2024-11-20 19:03:41.245646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.182 [2024-11-20 19:03:41.245668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.182 [2024-11-20 19:03:41.245677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:19.183 [2024-11-20 19:03:41.251481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.183 [2024-11-20 19:03:41.251503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.183 [2024-11-20 19:03:41.251511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:19.183 [2024-11-20 19:03:41.257435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.183 [2024-11-20 19:03:41.257459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.183 [2024-11-20 19:03:41.257467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:19.183 [2024-11-20 19:03:41.262734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.183 [2024-11-20 19:03:41.262756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.183 [2024-11-20 19:03:41.262764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:19.183 [2024-11-20 19:03:41.267948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.183 [2024-11-20 19:03:41.267971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.183 [2024-11-20 19:03:41.267979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:19.183 [2024-11-20 19:03:41.273373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.183 [2024-11-20 19:03:41.273396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.183 [2024-11-20 19:03:41.273405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:19.183 [2024-11-20 19:03:41.278753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.183 [2024-11-20 19:03:41.278776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.183 [2024-11-20 19:03:41.278785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:19.183 [2024-11-20 19:03:41.284130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.183 [2024-11-20 19:03:41.284152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.183 [2024-11-20 19:03:41.284161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:19.183 [2024-11-20 19:03:41.289459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.183 [2024-11-20 19:03:41.289482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.183 [2024-11-20 19:03:41.289490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:19.183 [2024-11-20 19:03:41.294899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.183 [2024-11-20 19:03:41.294922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.183 [2024-11-20 19:03:41.294930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:19.183 [2024-11-20 19:03:41.300184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.183 [2024-11-20 19:03:41.300214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.183 [2024-11-20 19:03:41.300223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:19.183 [2024-11-20 19:03:41.305391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.183 [2024-11-20 19:03:41.305413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.183 [2024-11-20 19:03:41.305427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:19.183 [2024-11-20 19:03:41.310703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.183 [2024-11-20 19:03:41.310725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.183 [2024-11-20 19:03:41.310733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:19.183 [2024-11-20 19:03:41.315986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.183 [2024-11-20 19:03:41.316008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.183 [2024-11-20 19:03:41.316016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:19.183 [2024-11-20 19:03:41.321294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.183 [2024-11-20 19:03:41.321316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.183 [2024-11-20 19:03:41.321323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:19.183 [2024-11-20 19:03:41.326636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.183 [2024-11-20 19:03:41.326659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.183 [2024-11-20 19:03:41.326667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:19.183 [2024-11-20 19:03:41.332092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.183 [2024-11-20 19:03:41.332115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.183 [2024-11-20 19:03:41.332123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:19.183 [2024-11-20 19:03:41.337583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.183 [2024-11-20 19:03:41.337606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.183 [2024-11-20 19:03:41.337614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:19.183 [2024-11-20 19:03:41.342962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.183 [2024-11-20 19:03:41.342986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.183 [2024-11-20 19:03:41.342994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:19.183 [2024-11-20 19:03:41.348331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.183 [2024-11-20 19:03:41.348352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.183 [2024-11-20 19:03:41.348360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:19.183 [2024-11-20 19:03:41.353646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.183 [2024-11-20 19:03:41.353668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.183 [2024-11-20 19:03:41.353676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:19.183 [2024-11-20 19:03:41.359043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.183 [2024-11-20 19:03:41.359065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.183 [2024-11-20 19:03:41.359073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:19.183 [2024-11-20 19:03:41.364360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.183 [2024-11-20 19:03:41.364381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.183 [2024-11-20 19:03:41.364390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:19.183 [2024-11-20 19:03:41.369543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.183 [2024-11-20 19:03:41.369565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.183 [2024-11-20 19:03:41.369573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:19.183 [2024-11-20 19:03:41.374670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.183 [2024-11-20 19:03:41.374692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.183 [2024-11-20 19:03:41.374700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:19.183 [2024-11-20 19:03:41.379838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.183 [2024-11-20 19:03:41.379859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.183 [2024-11-20 19:03:41.379867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:19.183 [2024-11-20 19:03:41.384991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.183 [2024-11-20 19:03:41.385013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.183 [2024-11-20 19:03:41.385021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:19.183 [2024-11-20 19:03:41.390076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.183 [2024-11-20 19:03:41.390098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.184 [2024-11-20 19:03:41.390106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:19.184 [2024-11-20 19:03:41.395468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.184 [2024-11-20 19:03:41.395490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.184 [2024-11-20 19:03:41.395501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:19.184 [2024-11-20 19:03:41.401573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.184 [2024-11-20 19:03:41.401595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.184 [2024-11-20 19:03:41.401604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:19.184 [2024-11-20 19:03:41.407537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.184 [2024-11-20 19:03:41.407561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.184 [2024-11-20 19:03:41.407570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:19.184 [2024-11-20 19:03:41.414782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.184 [2024-11-20 19:03:41.414805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.184 [2024-11-20 19:03:41.414814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:19.184 [2024-11-20 19:03:41.422588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.184 [2024-11-20 19:03:41.422612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.184 [2024-11-20 19:03:41.422621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:19.184 [2024-11-20 19:03:41.430210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.184 [2024-11-20 19:03:41.430233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.184 [2024-11-20 19:03:41.430241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:19.184 [2024-11-20 19:03:41.439258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.184 [2024-11-20 19:03:41.439281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.184 [2024-11-20 19:03:41.439290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:19.184 [2024-11-20 19:03:41.447388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.184 [2024-11-20 19:03:41.447412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.184 [2024-11-20 19:03:41.447420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:19.184 [2024-11-20 19:03:41.455665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.184 [2024-11-20 19:03:41.455689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.184 [2024-11-20 19:03:41.455698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:19.184 [2024-11-20 19:03:41.463471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.184 [2024-11-20 19:03:41.463499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.184 [2024-11-20 19:03:41.463507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:19.184 [2024-11-20 19:03:41.471938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.184 [2024-11-20 19:03:41.471962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.184 [2024-11-20 19:03:41.471970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:19.184 [2024-11-20 19:03:41.480497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.184 [2024-11-20 19:03:41.480521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.184 [2024-11-20 19:03:41.480529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:19.184 [2024-11-20 19:03:41.489027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.184 [2024-11-20 19:03:41.489051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.184 [2024-11-20 19:03:41.489060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:19.184 [2024-11-20 19:03:41.497225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.184 [2024-11-20 19:03:41.497248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.184 [2024-11-20 19:03:41.497256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:19.184 [2024-11-20 19:03:41.505527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.184 [2024-11-20 19:03:41.505549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.184 [2024-11-20 19:03:41.505558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:19.444 [2024-11-20 19:03:41.513705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.444 [2024-11-20 19:03:41.513728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.444 [2024-11-20 19:03:41.513737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:19.444 [2024-11-20 19:03:41.521560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.444 [2024-11-20 19:03:41.521583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.444 [2024-11-20 19:03:41.521592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:19.444 [2024-11-20 19:03:41.530368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.444 [2024-11-20 19:03:41.530391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.444 [2024-11-20 19:03:41.530400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:19.444 [2024-11-20 19:03:41.538514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.444 [2024-11-20 19:03:41.538537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.444 [2024-11-20 19:03:41.538546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:19.444 [2024-11-20 19:03:41.546432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.444 [2024-11-20 19:03:41.546456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.444 [2024-11-20 19:03:41.546465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:19.444 [2024-11-20 19:03:41.552609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.444 [2024-11-20 19:03:41.552632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.444 [2024-11-20 19:03:41.552641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:19.444 [2024-11-20 19:03:41.559913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.444 [2024-11-20 19:03:41.559936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.444 [2024-11-20 19:03:41.559944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:19.444 [2024-11-20 19:03:41.566749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.444 [2024-11-20 19:03:41.566771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.444 [2024-11-20 19:03:41.566780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:19.444 [2024-11-20 19:03:41.572170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560)
00:26:19.444 [2024-11-20 19:03:41.572191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.445 [2024-11-20 19:03:41.572200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.445 [2024-11-20 19:03:41.577690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.445 [2024-11-20 19:03:41.577712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.445 [2024-11-20 19:03:41.577720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.445 [2024-11-20 19:03:41.583062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.445 [2024-11-20 19:03:41.583084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.445 [2024-11-20 19:03:41.583092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.445 [2024-11-20 19:03:41.588296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.445 [2024-11-20 19:03:41.588317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.445 [2024-11-20 19:03:41.588328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.445 [2024-11-20 19:03:41.593734] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.445 [2024-11-20 19:03:41.593756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.445 [2024-11-20 19:03:41.593765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.445 [2024-11-20 19:03:41.599146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.445 [2024-11-20 19:03:41.599168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.445 [2024-11-20 19:03:41.599176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.445 [2024-11-20 19:03:41.604531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.445 [2024-11-20 19:03:41.604553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.445 [2024-11-20 19:03:41.604561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.445 [2024-11-20 19:03:41.609518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.445 [2024-11-20 19:03:41.609540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.445 [2024-11-20 19:03:41.609548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:26:19.445 [2024-11-20 19:03:41.614635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.445 [2024-11-20 19:03:41.614658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.445 [2024-11-20 19:03:41.614666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.445 [2024-11-20 19:03:41.619882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.445 [2024-11-20 19:03:41.619904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.445 [2024-11-20 19:03:41.619912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.445 [2024-11-20 19:03:41.625047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.445 [2024-11-20 19:03:41.625069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.445 [2024-11-20 19:03:41.625076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.445 [2024-11-20 19:03:41.630186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.445 [2024-11-20 19:03:41.630215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.445 [2024-11-20 19:03:41.630223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.445 [2024-11-20 19:03:41.635439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.445 [2024-11-20 19:03:41.635464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.445 [2024-11-20 19:03:41.635472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.445 [2024-11-20 19:03:41.640489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.445 [2024-11-20 19:03:41.640510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.445 [2024-11-20 19:03:41.640518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.445 [2024-11-20 19:03:41.645938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.445 [2024-11-20 19:03:41.645960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.445 [2024-11-20 19:03:41.645969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.445 [2024-11-20 19:03:41.651910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.445 [2024-11-20 19:03:41.651931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.445 [2024-11-20 19:03:41.651939] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.445 [2024-11-20 19:03:41.658533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.445 [2024-11-20 19:03:41.658555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.445 [2024-11-20 19:03:41.658563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.445 [2024-11-20 19:03:41.665031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.445 [2024-11-20 19:03:41.665054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.445 [2024-11-20 19:03:41.665063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.445 [2024-11-20 19:03:41.671870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.445 [2024-11-20 19:03:41.671892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.445 [2024-11-20 19:03:41.671900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.445 [2024-11-20 19:03:41.676659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.445 [2024-11-20 19:03:41.676680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:19.445 [2024-11-20 19:03:41.676688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.445 [2024-11-20 19:03:41.682713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.445 [2024-11-20 19:03:41.682736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.445 [2024-11-20 19:03:41.682744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.445 [2024-11-20 19:03:41.689945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.445 [2024-11-20 19:03:41.689968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.445 [2024-11-20 19:03:41.689977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.445 [2024-11-20 19:03:41.697038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.445 [2024-11-20 19:03:41.697061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.445 [2024-11-20 19:03:41.697069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.445 [2024-11-20 19:03:41.705466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.445 [2024-11-20 19:03:41.705489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.445 [2024-11-20 19:03:41.705498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.445 [2024-11-20 19:03:41.712026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.445 [2024-11-20 19:03:41.712049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.445 [2024-11-20 19:03:41.712058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.445 [2024-11-20 19:03:41.717056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.445 [2024-11-20 19:03:41.717079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.445 [2024-11-20 19:03:41.717087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.445 [2024-11-20 19:03:41.722962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.445 [2024-11-20 19:03:41.722983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.445 [2024-11-20 19:03:41.722992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.445 [2024-11-20 19:03:41.728329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.445 [2024-11-20 19:03:41.728351] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.446 [2024-11-20 19:03:41.728361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.446 [2024-11-20 19:03:41.733538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.446 [2024-11-20 19:03:41.733560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.446 [2024-11-20 19:03:41.733570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.446 [2024-11-20 19:03:41.738763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.446 [2024-11-20 19:03:41.738787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.446 [2024-11-20 19:03:41.738800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.446 [2024-11-20 19:03:41.743981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.446 [2024-11-20 19:03:41.744003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.446 [2024-11-20 19:03:41.744011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.446 [2024-11-20 19:03:41.749164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 
00:26:19.446 [2024-11-20 19:03:41.749185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.446 [2024-11-20 19:03:41.749193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.446 [2024-11-20 19:03:41.754549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.446 [2024-11-20 19:03:41.754572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.446 [2024-11-20 19:03:41.754580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.446 [2024-11-20 19:03:41.760898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.446 [2024-11-20 19:03:41.760921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.446 [2024-11-20 19:03:41.760929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.446 [2024-11-20 19:03:41.767976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.446 [2024-11-20 19:03:41.767998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.446 [2024-11-20 19:03:41.768007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.706 [2024-11-20 19:03:41.773925] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.706 [2024-11-20 19:03:41.773947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.706 [2024-11-20 19:03:41.773955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.706 [2024-11-20 19:03:41.779378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.706 [2024-11-20 19:03:41.779401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.706 [2024-11-20 19:03:41.779410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.706 [2024-11-20 19:03:41.785413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.706 [2024-11-20 19:03:41.785433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.706 [2024-11-20 19:03:41.785441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.706 [2024-11-20 19:03:41.790692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.706 [2024-11-20 19:03:41.790719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.706 [2024-11-20 19:03:41.790727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:26:19.706 [2024-11-20 19:03:41.796110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.706 [2024-11-20 19:03:41.796132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.706 [2024-11-20 19:03:41.796140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.706 [2024-11-20 19:03:41.801363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.706 [2024-11-20 19:03:41.801384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.706 [2024-11-20 19:03:41.801393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.706 [2024-11-20 19:03:41.806649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.706 [2024-11-20 19:03:41.806670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.706 [2024-11-20 19:03:41.806678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.706 [2024-11-20 19:03:41.811946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.706 [2024-11-20 19:03:41.811968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.706 [2024-11-20 19:03:41.811976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.706 [2024-11-20 19:03:41.817119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.706 [2024-11-20 19:03:41.817140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.706 [2024-11-20 19:03:41.817148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.706 [2024-11-20 19:03:41.822343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.706 [2024-11-20 19:03:41.822365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.706 [2024-11-20 19:03:41.822373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.706 [2024-11-20 19:03:41.827533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.706 [2024-11-20 19:03:41.827554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.706 [2024-11-20 19:03:41.827562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.706 [2024-11-20 19:03:41.832806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.706 [2024-11-20 19:03:41.832827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.706 [2024-11-20 19:03:41.832835] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.706 [2024-11-20 19:03:41.838040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.706 [2024-11-20 19:03:41.838062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.706 [2024-11-20 19:03:41.838069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.706 [2024-11-20 19:03:41.843309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.706 [2024-11-20 19:03:41.843331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.706 [2024-11-20 19:03:41.843339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.706 [2024-11-20 19:03:41.848563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.706 [2024-11-20 19:03:41.848584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.706 [2024-11-20 19:03:41.848592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.706 [2024-11-20 19:03:41.853631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.707 [2024-11-20 19:03:41.853654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:19.707 [2024-11-20 19:03:41.853662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.707 [2024-11-20 19:03:41.859111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.707 [2024-11-20 19:03:41.859135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.707 [2024-11-20 19:03:41.859143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.707 [2024-11-20 19:03:41.864462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.707 [2024-11-20 19:03:41.864484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.707 [2024-11-20 19:03:41.864492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.707 [2024-11-20 19:03:41.869778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.707 [2024-11-20 19:03:41.869800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.707 [2024-11-20 19:03:41.869809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.707 [2024-11-20 19:03:41.876582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.707 [2024-11-20 19:03:41.876604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:4 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.707 [2024-11-20 19:03:41.876612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.707 [2024-11-20 19:03:41.882011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.707 [2024-11-20 19:03:41.882033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.707 [2024-11-20 19:03:41.882045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.707 [2024-11-20 19:03:41.887287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.707 [2024-11-20 19:03:41.887309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.707 [2024-11-20 19:03:41.887318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.707 [2024-11-20 19:03:41.892752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.707 [2024-11-20 19:03:41.892774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.707 [2024-11-20 19:03:41.892782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.707 [2024-11-20 19:03:41.898380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.707 [2024-11-20 19:03:41.898401] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.707 [2024-11-20 19:03:41.898409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.707 [2024-11-20 19:03:41.904401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.707 [2024-11-20 19:03:41.904425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.707 [2024-11-20 19:03:41.904433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.707 [2024-11-20 19:03:41.911064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.707 [2024-11-20 19:03:41.911087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.707 [2024-11-20 19:03:41.911096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.707 [2024-11-20 19:03:41.918819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.707 [2024-11-20 19:03:41.918843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.707 [2024-11-20 19:03:41.918852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.707 [2024-11-20 19:03:41.926489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xf58560) 00:26:19.707 [2024-11-20 19:03:41.926512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.707 [2024-11-20 19:03:41.926520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.707 [2024-11-20 19:03:41.934603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.707 [2024-11-20 19:03:41.934627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.707 [2024-11-20 19:03:41.934635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.707 [2024-11-20 19:03:41.943039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.707 [2024-11-20 19:03:41.943063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.707 [2024-11-20 19:03:41.943071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.707 [2024-11-20 19:03:41.951045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.707 [2024-11-20 19:03:41.951066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.707 [2024-11-20 19:03:41.951075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.707 [2024-11-20 19:03:41.958901] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.707 [2024-11-20 19:03:41.958924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.707 [2024-11-20 19:03:41.958933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.707 [2024-11-20 19:03:41.966313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.707 [2024-11-20 19:03:41.966336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.707 [2024-11-20 19:03:41.966344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.707 [2024-11-20 19:03:41.973611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.707 [2024-11-20 19:03:41.973635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.707 [2024-11-20 19:03:41.973644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.707 [2024-11-20 19:03:41.981555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.707 [2024-11-20 19:03:41.981577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.707 [2024-11-20 19:03:41.981585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:26:19.707 [2024-11-20 19:03:41.989607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.707 [2024-11-20 19:03:41.989630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.707 [2024-11-20 19:03:41.989639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.707 [2024-11-20 19:03:41.997455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.707 [2024-11-20 19:03:41.997476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.707 [2024-11-20 19:03:41.997485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.707 [2024-11-20 19:03:42.004835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.707 [2024-11-20 19:03:42.004858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.707 [2024-11-20 19:03:42.004870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.707 [2024-11-20 19:03:42.013496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.707 [2024-11-20 19:03:42.013519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.707 [2024-11-20 19:03:42.013528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.707 [2024-11-20 19:03:42.020750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.707 [2024-11-20 19:03:42.020772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.707 [2024-11-20 19:03:42.020781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.707 [2024-11-20 19:03:42.027258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.707 [2024-11-20 19:03:42.027282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.707 [2024-11-20 19:03:42.027291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.967 [2024-11-20 19:03:42.032890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.967 [2024-11-20 19:03:42.032912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.967 [2024-11-20 19:03:42.032921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.967 [2024-11-20 19:03:42.038786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.967 [2024-11-20 19:03:42.038809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.967 [2024-11-20 19:03:42.038819] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.967 [2024-11-20 19:03:42.044141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.967 [2024-11-20 19:03:42.044162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.967 [2024-11-20 19:03:42.044170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.967 [2024-11-20 19:03:42.049374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.967 [2024-11-20 19:03:42.049395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.967 [2024-11-20 19:03:42.049404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.967 [2024-11-20 19:03:42.054633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.967 [2024-11-20 19:03:42.054654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.967 [2024-11-20 19:03:42.054662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.967 [2024-11-20 19:03:42.059901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.967 [2024-11-20 19:03:42.059926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:19.967 [2024-11-20 19:03:42.059934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.967 [2024-11-20 19:03:42.065093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.967 [2024-11-20 19:03:42.065115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.967 [2024-11-20 19:03:42.065123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.967 [2024-11-20 19:03:42.070292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.967 [2024-11-20 19:03:42.070313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.967 [2024-11-20 19:03:42.070321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.967 [2024-11-20 19:03:42.075482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.967 [2024-11-20 19:03:42.075504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.967 [2024-11-20 19:03:42.075512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.967 [2024-11-20 19:03:42.080693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.967 [2024-11-20 19:03:42.080715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.968 [2024-11-20 19:03:42.080723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.968 [2024-11-20 19:03:42.085892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.968 [2024-11-20 19:03:42.085912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.968 [2024-11-20 19:03:42.085920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.968 [2024-11-20 19:03:42.091049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.968 [2024-11-20 19:03:42.091070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.968 [2024-11-20 19:03:42.091078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.968 [2024-11-20 19:03:42.096200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.968 [2024-11-20 19:03:42.096230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.968 [2024-11-20 19:03:42.096237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.968 [2024-11-20 19:03:42.101317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.968 [2024-11-20 19:03:42.101339] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.968 [2024-11-20 19:03:42.101347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.968 [2024-11-20 19:03:42.106417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.968 [2024-11-20 19:03:42.106439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.968 [2024-11-20 19:03:42.106447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.968 [2024-11-20 19:03:42.111577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.968 [2024-11-20 19:03:42.111599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.968 [2024-11-20 19:03:42.111607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.968 [2024-11-20 19:03:42.116762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.968 [2024-11-20 19:03:42.116784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.968 [2024-11-20 19:03:42.116792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.968 [2024-11-20 19:03:42.121944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 
00:26:19.968 [2024-11-20 19:03:42.121965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.968 [2024-11-20 19:03:42.121973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.968 [2024-11-20 19:03:42.127090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.968 [2024-11-20 19:03:42.127111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.968 [2024-11-20 19:03:42.127118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.968 [2024-11-20 19:03:42.132268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.968 [2024-11-20 19:03:42.132289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.968 [2024-11-20 19:03:42.132296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.968 [2024-11-20 19:03:42.137440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.968 [2024-11-20 19:03:42.137460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.968 [2024-11-20 19:03:42.137468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.968 [2024-11-20 19:03:42.142626] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.968 [2024-11-20 19:03:42.142648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.968 [2024-11-20 19:03:42.142655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.968 [2024-11-20 19:03:42.147765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.968 [2024-11-20 19:03:42.147786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.968 [2024-11-20 19:03:42.147798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.968 [2024-11-20 19:03:42.152985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.968 [2024-11-20 19:03:42.153006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.968 [2024-11-20 19:03:42.153014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.968 5300.50 IOPS, 662.56 MiB/s [2024-11-20T18:03:42.293Z] [2024-11-20 19:03:42.159354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf58560) 00:26:19.968 [2024-11-20 19:03:42.159376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.968 [2024-11-20 19:03:42.159384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.968 00:26:19.968 Latency(us) 00:26:19.968 [2024-11-20T18:03:42.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:19.968 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:19.968 nvme0n1 : 2.00 5301.07 662.63 0.00 0.00 3015.08 659.26 11109.91 00:26:19.968 [2024-11-20T18:03:42.293Z] =================================================================================================================== 00:26:19.968 [2024-11-20T18:03:42.293Z] Total : 5301.07 662.63 0.00 0.00 3015.08 659.26 11109.91 00:26:19.968 { 00:26:19.968 "results": [ 00:26:19.968 { 00:26:19.968 "job": "nvme0n1", 00:26:19.968 "core_mask": "0x2", 00:26:19.968 "workload": "randread", 00:26:19.968 "status": "finished", 00:26:19.968 "queue_depth": 16, 00:26:19.968 "io_size": 131072, 00:26:19.968 "runtime": 2.002804, 00:26:19.968 "iops": 5301.067902800274, 00:26:19.968 "mibps": 662.6334878500343, 00:26:19.968 "io_failed": 0, 00:26:19.968 "io_timeout": 0, 00:26:19.968 "avg_latency_us": 3015.0750115941637, 00:26:19.968 "min_latency_us": 659.2609523809524, 00:26:19.968 "max_latency_us": 11109.91238095238 00:26:19.968 } 00:26:19.968 ], 00:26:19.968 "core_count": 1 00:26:19.968 } 00:26:19.968 19:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:19.968 19:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:19.968 19:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:19.968 | .driver_specific 00:26:19.968 | .nvme_error 00:26:19.968 | .status_code 00:26:19.968 | .command_transient_transport_error' 00:26:19.968 19:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_get_iostat -b nvme0n1 00:26:20.228 19:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 343 > 0 )) 00:26:20.228 19:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3792148 00:26:20.228 19:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3792148 ']' 00:26:20.228 19:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3792148 00:26:20.228 19:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:20.228 19:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:20.228 19:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3792148 00:26:20.228 19:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:20.228 19:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:20.228 19:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3792148' 00:26:20.228 killing process with pid 3792148 00:26:20.228 19:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3792148 00:26:20.228 Received shutdown signal, test time was about 2.000000 seconds 00:26:20.228 00:26:20.228 Latency(us) 00:26:20.228 [2024-11-20T18:03:42.553Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:20.228 [2024-11-20T18:03:42.553Z] =================================================================================================================== 00:26:20.228 [2024-11-20T18:03:42.553Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:20.228 19:03:42 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3792148 00:26:20.487 19:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:26:20.487 19:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:20.487 19:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:20.487 19:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:20.487 19:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:20.487 19:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3792686 00:26:20.487 19:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3792686 /var/tmp/bperf.sock 00:26:20.487 19:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:20.487 19:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3792686 ']' 00:26:20.487 19:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:20.487 19:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:20.487 19:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:20.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:20.487 19:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:20.487 19:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:20.487 [2024-11-20 19:03:42.640408] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:26:20.487 [2024-11-20 19:03:42.640455] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3792686 ] 00:26:20.487 [2024-11-20 19:03:42.714739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.487 [2024-11-20 19:03:42.756802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.746 19:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:20.746 19:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:20.746 19:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:20.746 19:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:20.746 19:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:20.746 19:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.746 19:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:20.746 19:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.746 19:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:20.746 19:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:21.314 nvme0n1 00:26:21.314 19:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:21.314 19:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.314 19:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:21.314 19:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.314 19:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:21.314 19:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:21.314 Running I/O for 2 seconds... 
00:26:21.314 [2024-11-20 19:03:43.601686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef6020
00:26:21.314 [2024-11-20 19:03:43.602459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.314 [2024-11-20 19:03:43.602488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:21.314 [2024-11-20 19:03:43.611139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee0ea0
00:26:21.314 [2024-11-20 19:03:43.611825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:17218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.314 [2024-11-20 19:03:43.611847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:21.314 [2024-11-20 19:03:43.620207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee5a90
00:26:21.314 [2024-11-20 19:03:43.621049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.314 [2024-11-20 19:03:43.621068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:26:21.314 [2024-11-20 19:03:43.629684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee8088
00:26:21.314 [2024-11-20 19:03:43.630655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.314 [2024-11-20 19:03:43.630675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:26:21.314 [2024-11-20 19:03:43.639396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee1f80
00:26:21.574 [2024-11-20 19:03:43.640534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.574 [2024-11-20 19:03:43.640554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:26:21.574 [2024-11-20 19:03:43.649019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef0788
00:26:21.574 [2024-11-20 19:03:43.650224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.574 [2024-11-20 19:03:43.650248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:26:21.574 [2024-11-20 19:03:43.658625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef57b0
00:26:21.574 [2024-11-20 19:03:43.659947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.574 [2024-11-20 19:03:43.659966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:26:21.574 [2024-11-20 19:03:43.668126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee6738
00:26:21.574 [2024-11-20 19:03:43.669563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.574 [2024-11-20 19:03:43.669582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:26:21.574 [2024-11-20 19:03:43.674489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee3498
00:26:21.574 [2024-11-20 19:03:43.675118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.574 [2024-11-20 19:03:43.675138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:26:21.574 [2024-11-20 19:03:43.686330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eefae0
00:26:21.574 [2024-11-20 19:03:43.687760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.574 [2024-11-20 19:03:43.687779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:21.574 [2024-11-20 19:03:43.692670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef5378
00:26:21.574 [2024-11-20 19:03:43.693293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.574 [2024-11-20 19:03:43.693313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:26:21.574 [2024-11-20 19:03:43.701851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef31b8
00:26:21.574 [2024-11-20 19:03:43.702511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.574 [2024-11-20 19:03:43.702531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:26:21.574 [2024-11-20 19:03:43.710930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef20d8
00:26:21.575 [2024-11-20 19:03:43.711570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.575 [2024-11-20 19:03:43.711590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:26:21.575 [2024-11-20 19:03:43.719932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef0ff8
00:26:21.575 [2024-11-20 19:03:43.720585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.575 [2024-11-20 19:03:43.720604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:26:21.575 [2024-11-20 19:03:43.731002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee1b48
00:26:21.575 [2024-11-20 19:03:43.732194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:8802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.575 [2024-11-20 19:03:43.732217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:26:21.575 [2024-11-20 19:03:43.740583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee49b0
00:26:21.575 [2024-11-20 19:03:43.742021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.575 [2024-11-20 19:03:43.742040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:21.575 [2024-11-20 19:03:43.746941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef6020
00:26:21.575 [2024-11-20 19:03:43.747577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:12530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.575 [2024-11-20 19:03:43.747596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:26:21.575 [2024-11-20 19:03:43.757460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef3a28
00:26:21.575 [2024-11-20 19:03:43.758138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.575 [2024-11-20 19:03:43.758157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:26:21.575 [2024-11-20 19:03:43.765915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee1b48
00:26:21.575 [2024-11-20 19:03:43.767077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.575 [2024-11-20 19:03:43.767096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:21.575 [2024-11-20 19:03:43.774238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efa7d8
00:26:21.575 [2024-11-20 19:03:43.774887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.575 [2024-11-20 19:03:43.774905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:26:21.575 [2024-11-20 19:03:43.783274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee9e10
00:26:21.575 [2024-11-20 19:03:43.783916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.575 [2024-11-20 19:03:43.783935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:26:21.575 [2024-11-20 19:03:43.792313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef7100
00:26:21.575 [2024-11-20 19:03:43.792952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.575 [2024-11-20 19:03:43.792971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:26:21.575 [2024-11-20 19:03:43.801344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef6020
00:26:21.575 [2024-11-20 19:03:43.801982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.575 [2024-11-20 19:03:43.802001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:26:21.575 [2024-11-20 19:03:43.810381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee84c0
00:26:21.575 [2024-11-20 19:03:43.811020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.575 [2024-11-20 19:03:43.811039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:26:21.575 [2024-11-20 19:03:43.819446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efc128
00:26:21.575 [2024-11-20 19:03:43.820111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.575 [2024-11-20 19:03:43.820129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:26:21.575 [2024-11-20 19:03:43.828764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efb480
00:26:21.575 [2024-11-20 19:03:43.829500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.575 [2024-11-20 19:03:43.829519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:26:21.575 [2024-11-20 19:03:43.838293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef81e0
00:26:21.575 [2024-11-20 19:03:43.839239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.575 [2024-11-20 19:03:43.839259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:26:21.575 [2024-11-20 19:03:43.846735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee4578
00:26:21.575 [2024-11-20 19:03:43.847383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.575 [2024-11-20 19:03:43.847403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:26:21.575 [2024-11-20 19:03:43.855646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efeb58
00:26:21.575 [2024-11-20 19:03:43.856310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.575 [2024-11-20 19:03:43.856330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:26:21.575 [2024-11-20 19:03:43.865257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee49b0
00:26:21.575 [2024-11-20 19:03:43.866024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.575 [2024-11-20 19:03:43.866042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:26:21.575 [2024-11-20 19:03:43.875153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efeb58
00:26:21.575 [2024-11-20 19:03:43.876472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.575 [2024-11-20 19:03:43.876492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:26:21.575 [2024-11-20 19:03:43.884709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee6738
00:26:21.575 [2024-11-20 19:03:43.885598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.575 [2024-11-20 19:03:43.885621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:26:21.575 [2024-11-20 19:03:43.895350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efa7d8
00:26:21.575 [2024-11-20 19:03:43.896882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.575 [2024-11-20 19:03:43.896902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:26:21.835 [2024-11-20 19:03:43.904644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef6cc8
00:26:21.835 [2024-11-20 19:03:43.906126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.835 [2024-11-20 19:03:43.906146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:26:21.835 [2024-11-20 19:03:43.913763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef8e88
00:26:21.835 [2024-11-20 19:03:43.915223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:15345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.835 [2024-11-20 19:03:43.915242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:26:21.835 [2024-11-20 19:03:43.920178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee4578
00:26:21.835 [2024-11-20 19:03:43.920940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.835 [2024-11-20 19:03:43.920959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:26:21.835 [2024-11-20 19:03:43.931921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efe720
00:26:21.835 [2024-11-20 19:03:43.933383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.835 [2024-11-20 19:03:43.933404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:26:21.835 [2024-11-20 19:03:43.938531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef6458
00:26:21.835 [2024-11-20 19:03:43.939310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.836 [2024-11-20 19:03:43.939328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:26:21.836 [2024-11-20 19:03:43.950650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eed4e8
00:26:21.836 [2024-11-20 19:03:43.952106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.836 [2024-11-20 19:03:43.952126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:26:21.836 [2024-11-20 19:03:43.957856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efda78
00:26:21.836 [2024-11-20 19:03:43.958856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.836 [2024-11-20 19:03:43.958875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:26:21.836 [2024-11-20 19:03:43.966970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef7538
00:26:21.836 [2024-11-20 19:03:43.967757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.836 [2024-11-20 19:03:43.967777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:26:21.836 [2024-11-20 19:03:43.977256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee4578
00:26:21.836 [2024-11-20 19:03:43.978496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.836 [2024-11-20 19:03:43.978515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:26:21.836 [2024-11-20 19:03:43.984795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eec408
00:26:21.836 [2024-11-20 19:03:43.985586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.836 [2024-11-20 19:03:43.985606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:26:21.836 [2024-11-20 19:03:43.996893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef6458
00:26:21.836 [2024-11-20 19:03:43.998223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.836 [2024-11-20 19:03:43.998242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:26:21.836 [2024-11-20 19:03:44.004040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee49b0
00:26:21.836 [2024-11-20 19:03:44.004868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.836 [2024-11-20 19:03:44.004887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:26:21.836 [2024-11-20 19:03:44.014236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efcdd0
00:26:21.836 [2024-11-20 19:03:44.015135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.836 [2024-11-20 19:03:44.015154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:26:21.836 [2024-11-20 19:03:44.024195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eecc78
00:26:21.836 [2024-11-20 19:03:44.025076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.836 [2024-11-20 19:03:44.025096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:26:21.836 [2024-11-20 19:03:44.033362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eecc78
00:26:21.836 [2024-11-20 19:03:44.034247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.836 [2024-11-20 19:03:44.034266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:26:21.836 [2024-11-20 19:03:44.042598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef5be8
00:26:21.836 [2024-11-20 19:03:44.043485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.836 [2024-11-20 19:03:44.043505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:21.836 [2024-11-20 19:03:44.051148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eed4e8
00:26:21.836 [2024-11-20 19:03:44.052008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.836 [2024-11-20 19:03:44.052028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:21.836 [2024-11-20 19:03:44.060192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef5378
00:26:21.836 [2024-11-20 19:03:44.061042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.836 [2024-11-20 19:03:44.061062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:26:21.836 [2024-11-20 19:03:44.071185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ede038
00:26:21.836 [2024-11-20 19:03:44.072507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.836 [2024-11-20 19:03:44.072527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:26:21.836 [2024-11-20 19:03:44.078978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef20d8
00:26:21.836 [2024-11-20 19:03:44.079827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.836 [2024-11-20 19:03:44.079846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:21.836 [2024-11-20 19:03:44.088044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef20d8
00:26:21.836 [2024-11-20 19:03:44.088889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:12498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.836 [2024-11-20 19:03:44.088909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:21.836 [2024-11-20 19:03:44.097362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef35f0
00:26:21.836 [2024-11-20 19:03:44.098443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.836 [2024-11-20 19:03:44.098462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:26:21.836 [2024-11-20 19:03:44.106816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee8d30
00:26:21.836 [2024-11-20 19:03:44.108058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.836 [2024-11-20 19:03:44.108077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:26:21.836 [2024-11-20 19:03:44.114875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eea248
00:26:21.836 [2024-11-20 19:03:44.115638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.836 [2024-11-20 19:03:44.115658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:26:21.836 [2024-11-20 19:03:44.123409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efeb58
00:26:21.836 [2024-11-20 19:03:44.124130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.836 [2024-11-20 19:03:44.124155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:26:21.836 [2024-11-20 19:03:44.133099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee6300
00:26:21.836 [2024-11-20 19:03:44.133828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.836 [2024-11-20 19:03:44.133848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:21.836 [2024-11-20 19:03:44.143103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef7100
00:26:21.836 [2024-11-20 19:03:44.144291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.836 [2024-11-20 19:03:44.144311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:26:21.836 [2024-11-20 19:03:44.151493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef96f8
00:26:21.836 [2024-11-20 19:03:44.152403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.836 [2024-11-20 19:03:44.152423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:26:22.097 [2024-11-20 19:03:44.160715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef57b0
00:26:22.097 [2024-11-20 19:03:44.161700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.097 [2024-11-20 19:03:44.161719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:26:22.097 [2024-11-20 19:03:44.170083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efb480
00:26:22.097 [2024-11-20 19:03:44.170986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.097 [2024-11-20 19:03:44.171005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:22.097 [2024-11-20 19:03:44.179884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee95a0
00:26:22.097 [2024-11-20 19:03:44.181079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.097 [2024-11-20 19:03:44.181097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:26:22.097 [2024-11-20 19:03:44.188366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efef90
00:26:22.097 [2024-11-20 19:03:44.189283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.097 [2024-11-20 19:03:44.189302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:26:22.097 [2024-11-20 19:03:44.197261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef4f40
00:26:22.097 [2024-11-20 19:03:44.198246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.097 [2024-11-20 19:03:44.198264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:26:22.097 [2024-11-20 19:03:44.206781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef7970
00:26:22.097 [2024-11-20 19:03:44.207878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.097 [2024-11-20 19:03:44.207897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:26:22.097 [2024-11-20 19:03:44.215699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efdeb0
00:26:22.097 [2024-11-20 19:03:44.216558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.097 [2024-11-20 19:03:44.216578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:26:22.097 [2024-11-20 19:03:44.224988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efa7d8
00:26:22.097 [2024-11-20 19:03:44.226077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.097 [2024-11-20 19:03:44.226097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:26:22.097 [2024-11-20 19:03:44.232188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee5ec8
00:26:22.097 [2024-11-20 19:03:44.232808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.097 [2024-11-20 19:03:44.232827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:26:22.097 [2024-11-20 19:03:44.244147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef6890
00:26:22.097 [2024-11-20 19:03:44.245498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.097 [2024-11-20 19:03:44.245518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:26:22.097 [2024-11-20 19:03:44.250569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef4f40
00:26:22.097 [2024-11-20 19:03:44.251187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.097 [2024-11-20 19:03:44.251211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:26:22.097 [2024-11-20 19:03:44.262374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eec408
00:26:22.097 [2024-11-20 19:03:44.263728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.097 [2024-11-20 19:03:44.263746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:22.097 [2024-11-20 19:03:44.268817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efa7d8
00:26:22.097 [2024-11-20 19:03:44.269459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.097 [2024-11-20 19:03:44.269478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:22.097 [2024-11-20 19:03:44.279779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee0ea0
00:26:22.097 [2024-11-20 19:03:44.280891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.097 [2024-11-20 19:03:44.280910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:22.097 [2024-11-20 19:03:44.289245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016edfdc0
00:26:22.097 [2024-11-20 19:03:44.290489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.097 [2024-11-20 19:03:44.290509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:22.097 [2024-11-20 19:03:44.297661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef9b30
00:26:22.097 [2024-11-20 19:03:44.298614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.097 [2024-11-20 19:03:44.298633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:26:22.097 [2024-11-20 19:03:44.306656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efcdd0
00:26:22.097 [2024-11-20 19:03:44.307700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.097 [2024-11-20 19:03:44.307719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0035 p:0 m:0
dnr:0 00:26:22.097 [2024-11-20 19:03:44.315541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efc998 00:26:22.097 [2024-11-20 19:03:44.316333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.097 [2024-11-20 19:03:44.316352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:22.097 [2024-11-20 19:03:44.325507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eeee38 00:26:22.097 [2024-11-20 19:03:44.326746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.097 [2024-11-20 19:03:44.326764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:22.097 [2024-11-20 19:03:44.333336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef1868 00:26:22.097 [2024-11-20 19:03:44.334106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.097 [2024-11-20 19:03:44.334125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:22.097 [2024-11-20 19:03:44.342359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef1868 00:26:22.097 [2024-11-20 19:03:44.343126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.097 [2024-11-20 19:03:44.343145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:91 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:22.097 [2024-11-20 19:03:44.351640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef81e0 00:26:22.097 [2024-11-20 19:03:44.352647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.097 [2024-11-20 19:03:44.352667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:22.097 [2024-11-20 19:03:44.360822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee95a0 00:26:22.097 [2024-11-20 19:03:44.361752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.097 [2024-11-20 19:03:44.361775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.097 [2024-11-20 19:03:44.370525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eef6a8 00:26:22.097 [2024-11-20 19:03:44.371734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.097 [2024-11-20 19:03:44.371753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:22.097 [2024-11-20 19:03:44.380000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee8d30 00:26:22.097 [2024-11-20 19:03:44.381332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.097 [2024-11-20 19:03:44.381352] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:22.097 [2024-11-20 19:03:44.389449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efe2e8 00:26:22.097 [2024-11-20 19:03:44.390912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.098 [2024-11-20 19:03:44.390931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:22.098 [2024-11-20 19:03:44.395935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef5be8 00:26:22.098 [2024-11-20 19:03:44.396705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.098 [2024-11-20 19:03:44.396724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:22.098 [2024-11-20 19:03:44.406912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eef6a8 00:26:22.098 [2024-11-20 19:03:44.408048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.098 [2024-11-20 19:03:44.408068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:22.098 [2024-11-20 19:03:44.413655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efe720 00:26:22.098 [2024-11-20 19:03:44.414278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.098 [2024-11-20 19:03:44.414297] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.357 [2024-11-20 19:03:44.423321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef81e0 00:26:22.357 [2024-11-20 19:03:44.424084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.357 [2024-11-20 19:03:44.424102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:22.357 [2024-11-20 19:03:44.434432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef57b0 00:26:22.357 [2024-11-20 19:03:44.435463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.357 [2024-11-20 19:03:44.435482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:22.357 [2024-11-20 19:03:44.442028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efef90 00:26:22.357 [2024-11-20 19:03:44.442498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.357 [2024-11-20 19:03:44.442517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:22.357 [2024-11-20 19:03:44.451532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef4f40 00:26:22.357 [2024-11-20 19:03:44.452083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:22.357 [2024-11-20 19:03:44.452103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:22.357 [2024-11-20 19:03:44.460668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efe720 00:26:22.357 [2024-11-20 19:03:44.461464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.357 [2024-11-20 19:03:44.461483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:22.357 [2024-11-20 19:03:44.469589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efe720 00:26:22.357 [2024-11-20 19:03:44.470581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.357 [2024-11-20 19:03:44.470600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:22.357 [2024-11-20 19:03:44.478754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef96f8 00:26:22.357 [2024-11-20 19:03:44.479739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:18583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.357 [2024-11-20 19:03:44.479757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:22.357 [2024-11-20 19:03:44.488137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efda78 00:26:22.357 [2024-11-20 19:03:44.489133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4020 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.357 [2024-11-20 19:03:44.489152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:22.357 [2024-11-20 19:03:44.496586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efb480 00:26:22.357 [2024-11-20 19:03:44.497450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.357 [2024-11-20 19:03:44.497470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:22.357 [2024-11-20 19:03:44.505915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efb480 00:26:22.357 [2024-11-20 19:03:44.506712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.357 [2024-11-20 19:03:44.506732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:22.357 [2024-11-20 19:03:44.514927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efb480 00:26:22.357 [2024-11-20 19:03:44.515722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.357 [2024-11-20 19:03:44.515748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:22.357 [2024-11-20 19:03:44.523355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef2d80 00:26:22.357 [2024-11-20 19:03:44.524192] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.357 [2024-11-20 19:03:44.524215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:22.357 [2024-11-20 19:03:44.532913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efdeb0 00:26:22.357 [2024-11-20 19:03:44.533887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.357 [2024-11-20 19:03:44.533906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:22.357 [2024-11-20 19:03:44.542590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efc998 00:26:22.357 [2024-11-20 19:03:44.543740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.357 [2024-11-20 19:03:44.543759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:22.357 [2024-11-20 19:03:44.551745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efbcf0 00:26:22.357 [2024-11-20 19:03:44.552411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.357 [2024-11-20 19:03:44.552432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:22.357 [2024-11-20 19:03:44.560906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef2510 00:26:22.357 [2024-11-20 19:03:44.561814] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.357 [2024-11-20 19:03:44.561834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:22.357 [2024-11-20 19:03:44.569235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef2510 00:26:22.357 [2024-11-20 19:03:44.570077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.357 [2024-11-20 19:03:44.570095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:22.357 [2024-11-20 19:03:44.577678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee3498 00:26:22.357 [2024-11-20 19:03:44.578408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.357 [2024-11-20 19:03:44.578427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:22.357 [2024-11-20 19:03:44.586502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efb480 00:26:22.357 [2024-11-20 19:03:44.587241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.357 [2024-11-20 19:03:44.587260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:22.357 27797.00 IOPS, 108.58 MiB/s [2024-11-20T18:03:44.683Z] [2024-11-20 19:03:44.596491] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee6300 00:26:22.358 [2024-11-20 19:03:44.597135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.358 [2024-11-20 19:03:44.597159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:22.358 [2024-11-20 19:03:44.604916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee4140 00:26:22.358 [2024-11-20 19:03:44.605661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.358 [2024-11-20 19:03:44.605681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:22.358 [2024-11-20 19:03:44.614369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee1b48 00:26:22.358 [2024-11-20 19:03:44.615281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.358 [2024-11-20 19:03:44.615302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:22.358 [2024-11-20 19:03:44.623807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eecc78 00:26:22.358 [2024-11-20 19:03:44.624221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.358 [2024-11-20 19:03:44.624241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:22.358 [2024-11-20 
19:03:44.634070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee6300 00:26:22.358 [2024-11-20 19:03:44.635179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.358 [2024-11-20 19:03:44.635198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:22.358 [2024-11-20 19:03:44.641623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee73e0 00:26:22.358 [2024-11-20 19:03:44.642168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.358 [2024-11-20 19:03:44.642187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:22.358 [2024-11-20 19:03:44.650694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efd208 00:26:22.358 [2024-11-20 19:03:44.651249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.358 [2024-11-20 19:03:44.651269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:22.358 [2024-11-20 19:03:44.659731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef6890 00:26:22.358 [2024-11-20 19:03:44.660290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.358 [2024-11-20 19:03:44.660309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 
sqhd:0033 p:0 m:0 dnr:0 00:26:22.358 [2024-11-20 19:03:44.668869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016edf118 00:26:22.358 [2024-11-20 19:03:44.669421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.358 [2024-11-20 19:03:44.669440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:22.358 [2024-11-20 19:03:44.677937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eee190 00:26:22.358 [2024-11-20 19:03:44.678510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.358 [2024-11-20 19:03:44.678530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:22.617 [2024-11-20 19:03:44.687254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef1868 00:26:22.617 [2024-11-20 19:03:44.687817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.617 [2024-11-20 19:03:44.687836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:22.617 [2024-11-20 19:03:44.695613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee1b48 00:26:22.617 [2024-11-20 19:03:44.696228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.617 [2024-11-20 19:03:44.696247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.617 [2024-11-20 19:03:44.705101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eeaab8 00:26:22.617 [2024-11-20 19:03:44.705855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.617 [2024-11-20 19:03:44.705874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:22.617 [2024-11-20 19:03:44.714352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ede8a8 00:26:22.617 [2024-11-20 19:03:44.715092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.617 [2024-11-20 19:03:44.715112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:22.617 [2024-11-20 19:03:44.725261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee0630 00:26:22.617 [2024-11-20 19:03:44.726509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.617 [2024-11-20 19:03:44.726532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:22.617 [2024-11-20 19:03:44.733240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eddc00 00:26:22.617 [2024-11-20 19:03:44.733707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.617 [2024-11-20 19:03:44.733727] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:22.617 [2024-11-20 19:03:44.742556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee5ec8 00:26:22.618 [2024-11-20 19:03:44.743098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.618 [2024-11-20 19:03:44.743118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:22.618 [2024-11-20 19:03:44.751710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eff3c8 00:26:22.618 [2024-11-20 19:03:44.752500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.618 [2024-11-20 19:03:44.752519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:22.618 [2024-11-20 19:03:44.761634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eecc78 00:26:22.618 [2024-11-20 19:03:44.762850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.618 [2024-11-20 19:03:44.762869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:22.618 [2024-11-20 19:03:44.769123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eef6a8 00:26:22.618 [2024-11-20 19:03:44.769544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:13649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.618 
[2024-11-20 19:03:44.769563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:26:22.618 [2024-11-20 19:03:44.778217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee1f80
00:26:22.618 [2024-11-20 19:03:44.778864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.618 [2024-11-20 19:03:44.778884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:26:22.618 [2024-11-20 19:03:44.787068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee1f80
00:26:22.618 [2024-11-20 19:03:44.787921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.618 [2024-11-20 19:03:44.787941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:26:22.618 [2024-11-20 19:03:44.796223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee4578
00:26:22.618 [2024-11-20 19:03:44.797068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.618 [2024-11-20 19:03:44.797086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:26:22.618 [2024-11-20 19:03:44.805514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eeaab8
00:26:22.618 [2024-11-20 19:03:44.806392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.618 [2024-11-20 19:03:44.806411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:22.618 [2024-11-20 19:03:44.813918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eea680
00:26:22.618 [2024-11-20 19:03:44.814671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.618 [2024-11-20 19:03:44.814690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:22.618 [2024-11-20 19:03:44.823270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee5ec8
00:26:22.618 [2024-11-20 19:03:44.823926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.618 [2024-11-20 19:03:44.823945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:22.618 [2024-11-20 19:03:44.832574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ede470
00:26:22.618 [2024-11-20 19:03:44.833425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.618 [2024-11-20 19:03:44.833447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:22.618 [2024-11-20 19:03:44.840682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eebb98
00:26:22.618 [2024-11-20 19:03:44.841408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.618 [2024-11-20 19:03:44.841427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:22.618 [2024-11-20 19:03:44.850128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee9168
00:26:22.618 [2024-11-20 19:03:44.850992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.618 [2024-11-20 19:03:44.851012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:26:22.618 [2024-11-20 19:03:44.861069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efe2e8
00:26:22.618 [2024-11-20 19:03:44.862199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.618 [2024-11-20 19:03:44.862228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:26:22.618 [2024-11-20 19:03:44.868759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee4140
00:26:22.618 [2024-11-20 19:03:44.869306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.618 [2024-11-20 19:03:44.869325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:26:22.618 [2024-11-20 19:03:44.879844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef1430
00:26:22.618 [2024-11-20 19:03:44.881320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.618 [2024-11-20 19:03:44.881339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:26:22.618 [2024-11-20 19:03:44.886338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef5378
00:26:22.618 [2024-11-20 19:03:44.887075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.618 [2024-11-20 19:03:44.887093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:26:22.618 [2024-11-20 19:03:44.895506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eebfd0
00:26:22.618 [2024-11-20 19:03:44.896239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:11264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.618 [2024-11-20 19:03:44.896259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:26:22.618 [2024-11-20 19:03:44.904786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efd640
00:26:22.618 [2024-11-20 19:03:44.905501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.618 [2024-11-20 19:03:44.905520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:26:22.618 [2024-11-20 19:03:44.914207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016edf550
00:26:22.618 [2024-11-20 19:03:44.915091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:14848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.618 [2024-11-20 19:03:44.915111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:22.618 [2024-11-20 19:03:44.922985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efc128
00:26:22.618 [2024-11-20 19:03:44.923779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.618 [2024-11-20 19:03:44.923798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:26:22.618 [2024-11-20 19:03:44.931935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efc560
00:26:22.618 [2024-11-20 19:03:44.932679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.618 [2024-11-20 19:03:44.932699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:26:22.878 [2024-11-20 19:03:44.943395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef7538
00:26:22.878 [2024-11-20 19:03:44.944885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.878 [2024-11-20 19:03:44.944903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:26:22.878 [2024-11-20 19:03:44.949948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efe2e8
00:26:22.878 [2024-11-20 19:03:44.950487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.878 [2024-11-20 19:03:44.950506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:22.878 [2024-11-20 19:03:44.959471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eee5c8
00:26:22.878 [2024-11-20 19:03:44.960228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.878 [2024-11-20 19:03:44.960247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:26:22.878 [2024-11-20 19:03:44.969842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eee190
00:26:22.878 [2024-11-20 19:03:44.970642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.878 [2024-11-20 19:03:44.970661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:26:22.878 [2024-11-20 19:03:44.978570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efe2e8
00:26:22.878 [2024-11-20 19:03:44.979604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.878 [2024-11-20 19:03:44.979624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:26:22.878 [2024-11-20 19:03:44.987585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee4140
00:26:22.878 [2024-11-20 19:03:44.988477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.878 [2024-11-20 19:03:44.988496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:22.878 [2024-11-20 19:03:44.996547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ede038
00:26:22.878 [2024-11-20 19:03:44.997429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.878 [2024-11-20 19:03:44.997448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:26:22.878 [2024-11-20 19:03:45.005230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee01f8
00:26:22.878 [2024-11-20 19:03:45.006186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.878 [2024-11-20 19:03:45.006208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:22.878 [2024-11-20 19:03:45.015011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef0350
00:26:22.878 [2024-11-20 19:03:45.015768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.878 [2024-11-20 19:03:45.015788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:26:22.878 [2024-11-20 19:03:45.024940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef81e0
00:26:22.878 [2024-11-20 19:03:45.026327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.878 [2024-11-20 19:03:45.026346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:22.878 [2024-11-20 19:03:45.032106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efe2e8
00:26:22.878 [2024-11-20 19:03:45.033014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.878 [2024-11-20 19:03:45.033032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:26:22.878 [2024-11-20 19:03:45.041113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef9f68
00:26:22.878 [2024-11-20 19:03:45.042174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.878 [2024-11-20 19:03:45.042195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:22.878 [2024-11-20 19:03:45.051107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef5378
00:26:22.878 [2024-11-20 19:03:45.052245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.878 [2024-11-20 19:03:45.052264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:26:22.878 [2024-11-20 19:03:45.058280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee3060
00:26:22.878 [2024-11-20 19:03:45.058953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.878 [2024-11-20 19:03:45.058971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:22.878 [2024-11-20 19:03:45.068093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef96f8
00:26:22.878 [2024-11-20 19:03:45.068581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.878 [2024-11-20 19:03:45.068603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:26:22.878 [2024-11-20 19:03:45.077991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eeb760
00:26:22.878 [2024-11-20 19:03:45.079096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.878 [2024-11-20 19:03:45.079114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:26:22.879 [2024-11-20 19:03:45.085132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee8088
00:26:22.879 [2024-11-20 19:03:45.085773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.879 [2024-11-20 19:03:45.085792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:26:22.879 [2024-11-20 19:03:45.094576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016edfdc0
00:26:22.879 [2024-11-20 19:03:45.095367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.879 [2024-11-20 19:03:45.095386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:26:22.879 [2024-11-20 19:03:45.104392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef4f40
00:26:22.879 [2024-11-20 19:03:45.104953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.879 [2024-11-20 19:03:45.104973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:26:22.879 [2024-11-20 19:03:45.113428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efc560
00:26:22.879 [2024-11-20 19:03:45.114294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.879 [2024-11-20 19:03:45.114313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:26:22.879 [2024-11-20 19:03:45.122008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee5220
00:26:22.879 [2024-11-20 19:03:45.122914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.879 [2024-11-20 19:03:45.122933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:26:22.879 [2024-11-20 19:03:45.131715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee9168
00:26:22.879 [2024-11-20 19:03:45.132731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.879 [2024-11-20 19:03:45.132750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:26:22.879 [2024-11-20 19:03:45.140647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee95a0
00:26:22.879 [2024-11-20 19:03:45.141857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.879 [2024-11-20 19:03:45.141875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:22.879 [2024-11-20 19:03:45.151456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee5658
00:26:22.879 [2024-11-20 19:03:45.152869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.879 [2024-11-20 19:03:45.152890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:26:22.879 [2024-11-20 19:03:45.157781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee8d30
00:26:22.879 [2024-11-20 19:03:45.158406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.879 [2024-11-20 19:03:45.158424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:22.879 [2024-11-20 19:03:45.166816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eeea00
00:26:22.879 [2024-11-20 19:03:45.167530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.879 [2024-11-20 19:03:45.167548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:26:22.879 [2024-11-20 19:03:45.176630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee84c0
00:26:22.879 [2024-11-20 19:03:45.177237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.879 [2024-11-20 19:03:45.177256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:22.879 [2024-11-20 19:03:45.186101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee49b0
00:26:22.879 [2024-11-20 19:03:45.186745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.879 [2024-11-20 19:03:45.186764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:22.879 [2024-11-20 19:03:45.195991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efb8b8
00:26:22.879 [2024-11-20 19:03:45.197269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.879 [2024-11-20 19:03:45.197287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:26:23.139 [2024-11-20 19:03:45.204555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef3a28
00:26:23.139 [2024-11-20 19:03:45.205514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.139 [2024-11-20 19:03:45.205533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:23.139 [2024-11-20 19:03:45.213850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee2c28
00:26:23.139 [2024-11-20 19:03:45.214672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.139 [2024-11-20 19:03:45.214691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:26:23.139 [2024-11-20 19:03:45.222393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef9b30
00:26:23.139 [2024-11-20 19:03:45.223659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.139 [2024-11-20 19:03:45.223677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:26:23.139 [2024-11-20 19:03:45.230729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eebb98
00:26:23.139 [2024-11-20 19:03:45.231473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.139 [2024-11-20 19:03:45.231493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:26:23.139 [2024-11-20 19:03:45.241043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee88f8
00:26:23.139 [2024-11-20 19:03:45.242209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.139 [2024-11-20 19:03:45.242228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:26:23.139 [2024-11-20 19:03:45.248337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee6300
00:26:23.139 [2024-11-20 19:03:45.249022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.139 [2024-11-20 19:03:45.249041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:26:23.139 [2024-11-20 19:03:45.258133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efe720
00:26:23.139 [2024-11-20 19:03:45.258732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.139 [2024-11-20 19:03:45.258751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:26:23.139 [2024-11-20 19:03:45.267590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef3e60
00:26:23.139 [2024-11-20 19:03:45.268192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.139 [2024-11-20 19:03:45.268219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:26:23.139 [2024-11-20 19:03:45.276712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efdeb0
00:26:23.139 [2024-11-20 19:03:45.277740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.139 [2024-11-20 19:03:45.277759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:26:23.139 [2024-11-20 19:03:45.285571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eec408
00:26:23.139 [2024-11-20 19:03:45.286826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.139 [2024-11-20 19:03:45.286846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:26:23.139 [2024-11-20 19:03:45.295558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef0ff8
00:26:23.139 [2024-11-20 19:03:45.296682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.139 [2024-11-20 19:03:45.296702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:23.139 [2024-11-20 19:03:45.304046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee2c28
00:26:23.139 [2024-11-20 19:03:45.305171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.139 [2024-11-20 19:03:45.305189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:26:23.139 [2024-11-20 19:03:45.311277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee99d8
00:26:23.139 [2024-11-20 19:03:45.311941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.139 [2024-11-20 19:03:45.311960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:26:23.139 [2024-11-20 19:03:45.321099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eef270
00:26:23.139 [2024-11-20 19:03:45.321569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.139 [2024-11-20 19:03:45.321589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:26:23.139 [2024-11-20 19:03:45.331492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee8088
00:26:23.139 [2024-11-20 19:03:45.332653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.139 [2024-11-20 19:03:45.332673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:26:23.139 [2024-11-20 19:03:45.339614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef0350
00:26:23.139 [2024-11-20 19:03:45.340895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.139 [2024-11-20 19:03:45.340915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:26:23.139 [2024-11-20 19:03:45.350505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee99d8
00:26:23.139 [2024-11-20 19:03:45.351975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.139 [2024-11-20 19:03:45.351994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:26:23.139 [2024-11-20 19:03:45.356847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eef270
00:26:23.139 [2024-11-20 19:03:45.357510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.139 [2024-11-20 19:03:45.357529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:26:23.139 [2024-11-20 19:03:45.367218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eefae0
00:26:23.139 [2024-11-20 19:03:45.367920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.139 [2024-11-20 19:03:45.367940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:26:23.139 [2024-11-20 19:03:45.375832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef1ca0
00:26:23.139 [2024-11-20 19:03:45.377126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.139 [2024-11-20 19:03:45.377144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:23.139 [2024-11-20 19:03:45.386502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eee5c8
00:26:23.139 [2024-11-20 19:03:45.387565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.139 [2024-11-20 19:03:45.387589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:26:23.139 [2024-11-20 19:03:45.396109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eea248
00:26:23.139 [2024-11-20 19:03:45.397448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.139 [2024-11-20 19:03:45.397468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:26:23.139 [2024-11-20 19:03:45.405577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016efd640
00:26:23.139 [2024-11-20 19:03:45.406958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.139 [2024-11-20 19:03:45.406977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:26:23.139 [2024-11-20 19:03:45.413397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eed4e8
00:26:23.139 [2024-11-20 19:03:45.414401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.139 [2024-11-20 19:03:45.414420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:23.139 [2024-11-20 19:03:45.422910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef81e0
00:26:23.139 [2024-11-20 19:03:45.424146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.139 [2024-11-20 19:03:45.424165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:26:23.139 [2024-11-20 19:03:45.430090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee9168
00:26:23.139 [2024-11-20 19:03:45.430826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.140 [2024-11-20 19:03:45.430845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:26:23.140 [2024-11-20 19:03:45.439288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eeb760
00:26:23.140 [2024-11-20 19:03:45.440044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.140 [2024-11-20 19:03:45.440063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:26:23.140 [2024-11-20 19:03:45.448863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee1710
00:26:23.140 [2024-11-20 19:03:45.449632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.140 [2024-11-20 19:03:45.449653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:26:23.140 [2024-11-20 19:03:45.459009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee6738
00:26:23.140 [2024-11-20 19:03:45.460163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.140 [2024-11-20 19:03:45.460182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:26:23.399 [2024-11-20 19:03:45.467139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef6890
00:26:23.399 [2024-11-20 19:03:45.467923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.399 [2024-11-20 19:03:45.467942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:23.399 [2024-11-20 19:03:45.477397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016edf118
00:26:23.399 [2024-11-20 19:03:45.478615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.399 [2024-11-20 19:03:45.478634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:23.399 [2024-11-20 19:03:45.486624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee8d30
00:26:23.399 [2024-11-20 19:03:45.487738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.399 [2024-11-20 19:03:45.487757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:23.399 [2024-11-20 19:03:45.494136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ede038
00:26:23.399 [2024-11-20 19:03:45.494893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.399 [2024-11-20 19:03:45.494911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:26:23.399 [2024-11-20 19:03:45.504671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef1430
00:26:23.399 [2024-11-20 19:03:45.505665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.399 [2024-11-20 19:03:45.505684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:26:23.399 [2024-11-20 19:03:45.513028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee0ea0
00:26:23.399 [2024-11-20 19:03:45.514303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4390 len:1 SGL DATA BLOCK OFFSET 0x0
len:0x1000 00:26:23.399 [2024-11-20 19:03:45.514322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:23.399 [2024-11-20 19:03:45.523128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef1430 00:26:23.399 [2024-11-20 19:03:45.524433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.399 [2024-11-20 19:03:45.524452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:23.399 [2024-11-20 19:03:45.532627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee8d30 00:26:23.399 [2024-11-20 19:03:45.534035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.399 [2024-11-20 19:03:45.534054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:23.399 [2024-11-20 19:03:45.542100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eed920 00:26:23.399 [2024-11-20 19:03:45.543645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.399 [2024-11-20 19:03:45.543664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:23.399 [2024-11-20 19:03:45.548608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee3d08 00:26:23.399 [2024-11-20 19:03:45.549362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 
nsid:1 lba:13093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.399 [2024-11-20 19:03:45.549382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:23.399 [2024-11-20 19:03:45.558252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee27f0 00:26:23.399 [2024-11-20 19:03:45.559003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.399 [2024-11-20 19:03:45.559021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:23.399 [2024-11-20 19:03:45.566557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee7c50 00:26:23.399 [2024-11-20 19:03:45.567362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.399 [2024-11-20 19:03:45.567381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:23.399 [2024-11-20 19:03:45.575989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ef6890 00:26:23.399 [2024-11-20 19:03:45.576947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.399 [2024-11-20 19:03:45.576966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:23.399 [2024-11-20 19:03:45.585841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016eea680 00:26:23.399 [2024-11-20 19:03:45.586580] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:15072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.399 [2024-11-20 19:03:45.586601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:23.399 28000.00 IOPS, 109.38 MiB/s [2024-11-20T18:03:45.724Z] [2024-11-20 19:03:45.595135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c04180) with pdu=0x200016ee49b0 00:26:23.399 [2024-11-20 19:03:45.596077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.399 [2024-11-20 19:03:45.596096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:23.399 00:26:23.399 Latency(us) 00:26:23.399 [2024-11-20T18:03:45.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.399 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:23.399 nvme0n1 : 2.01 27990.23 109.34 0.00 0.00 4567.30 2184.53 13044.78 00:26:23.399 [2024-11-20T18:03:45.724Z] =================================================================================================================== 00:26:23.399 [2024-11-20T18:03:45.724Z] Total : 27990.23 109.34 0.00 0.00 4567.30 2184.53 13044.78 00:26:23.399 { 00:26:23.400 "results": [ 00:26:23.400 { 00:26:23.400 "job": "nvme0n1", 00:26:23.400 "core_mask": "0x2", 00:26:23.400 "workload": "randwrite", 00:26:23.400 "status": "finished", 00:26:23.400 "queue_depth": 128, 00:26:23.400 "io_size": 4096, 00:26:23.400 "runtime": 2.005271, 00:26:23.400 "iops": 27990.23174423806, 00:26:23.400 "mibps": 109.33684275092992, 00:26:23.400 "io_failed": 0, 00:26:23.400 "io_timeout": 0, 00:26:23.400 "avg_latency_us": 4567.295624694576, 00:26:23.400 "min_latency_us": 2184.5333333333333, 00:26:23.400 
"max_latency_us": 13044.784761904762 00:26:23.400 } 00:26:23.400 ], 00:26:23.400 "core_count": 1 00:26:23.400 } 00:26:23.400 19:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:23.400 19:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:23.400 19:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:23.400 | .driver_specific 00:26:23.400 | .nvme_error 00:26:23.400 | .status_code 00:26:23.400 | .command_transient_transport_error' 00:26:23.400 19:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:23.659 19:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 220 > 0 )) 00:26:23.659 19:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3792686 00:26:23.659 19:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3792686 ']' 00:26:23.659 19:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3792686 00:26:23.659 19:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:23.659 19:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:23.659 19:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3792686 00:26:23.659 19:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:23.659 19:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:23.659 19:03:45 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3792686' 00:26:23.659 killing process with pid 3792686 00:26:23.659 19:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3792686 00:26:23.659 Received shutdown signal, test time was about 2.000000 seconds 00:26:23.659 00:26:23.659 Latency(us) 00:26:23.659 [2024-11-20T18:03:45.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.659 [2024-11-20T18:03:45.984Z] =================================================================================================================== 00:26:23.659 [2024-11-20T18:03:45.984Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:23.659 19:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3792686 00:26:23.918 19:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:23.918 19:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:23.918 19:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:23.918 19:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:23.918 19:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:23.918 19:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3793318 00:26:23.918 19:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3793318 /var/tmp/bperf.sock 00:26:23.918 19:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:23.918 19:03:46 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3793318 ']' 00:26:23.918 19:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:23.918 19:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:23.918 19:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:23.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:23.918 19:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:23.918 19:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:23.918 [2024-11-20 19:03:46.068774] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:26:23.918 [2024-11-20 19:03:46.068828] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3793318 ] 00:26:23.918 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:23.918 Zero copy mechanism will not be used. 
00:26:23.918 [2024-11-20 19:03:46.142892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.918 [2024-11-20 19:03:46.179362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:24.177 19:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:24.177 19:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:24.177 19:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:24.177 19:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:24.177 19:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:24.177 19:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.177 19:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:24.177 19:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.177 19:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:24.177 19:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:24.746 nvme0n1 00:26:24.746 19:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:24.746 19:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.746 19:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:24.746 19:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.746 19:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:24.746 19:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:24.746 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:24.746 Zero copy mechanism will not be used. 00:26:24.746 Running I/O for 2 seconds... 00:26:24.746 [2024-11-20 19:03:46.953849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:24.746 [2024-11-20 19:03:46.953918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.746 [2024-11-20 19:03:46.953949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.746 [2024-11-20 19:03:46.958723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:24.746 [2024-11-20 19:03:46.958945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.746 [2024-11-20 19:03:46.958970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:24.746 [2024-11-20 
19:03:46.964081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:24.746 [2024-11-20 19:03:46.964397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.746 [2024-11-20 19:03:46.964420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.746 [2024-11-20 19:03:46.969895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:24.746 [2024-11-20 19:03:46.970212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.746 [2024-11-20 19:03:46.970234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.746 [2024-11-20 19:03:46.976314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:24.746 [2024-11-20 19:03:46.976611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.746 [2024-11-20 19:03:46.976633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.746 [2024-11-20 19:03:46.981473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:24.746 [2024-11-20 19:03:46.981715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.746 [2024-11-20 19:03:46.981736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:26:24.746 [2024-11-20 19:03:46.986476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:24.746 [2024-11-20 19:03:46.986730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.746 [2024-11-20 19:03:46.986750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.746 [2024-11-20 19:03:46.991717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:24.746 [2024-11-20 19:03:46.991988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.746 [2024-11-20 19:03:46.992008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.746 [2024-11-20 19:03:46.997704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:24.746 [2024-11-20 19:03:46.997944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.746 [2024-11-20 19:03:46.997964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.746 [2024-11-20 19:03:47.002810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:24.746 [2024-11-20 19:03:47.003058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.746 [2024-11-20 19:03:47.003079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:24.746 [2024-11-20 19:03:47.008391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:24.746 [2024-11-20 19:03:47.008640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.746 [2024-11-20 19:03:47.008660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.746 [2024-11-20 19:03:47.013180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:24.746 [2024-11-20 19:03:47.013397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.746 [2024-11-20 19:03:47.013417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.746 [2024-11-20 19:03:47.018047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:24.746 [2024-11-20 19:03:47.018268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.746 [2024-11-20 19:03:47.018287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.746 [2024-11-20 19:03:47.022915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:24.746 [2024-11-20 19:03:47.023105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.746 [2024-11-20 19:03:47.023125] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:24.746 [2024-11-20 19:03:47.027609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:24.746 [2024-11-20 19:03:47.027821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.747 [2024-11-20 19:03:47.027842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.747 [2024-11-20 19:03:47.031792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:24.747 [2024-11-20 19:03:47.032004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.747 [2024-11-20 19:03:47.032025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.747 [2024-11-20 19:03:47.037050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:24.747 [2024-11-20 19:03:47.037298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.747 [2024-11-20 19:03:47.037319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.747 [2024-11-20 19:03:47.042861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:24.747 [2024-11-20 19:03:47.043078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:24.747 [2024-11-20 19:03:47.043099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:24.747 [2024-11-20 19:03:47.047371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:24.747 [2024-11-20 19:03:47.047588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.747 [2024-11-20 19:03:47.047613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.747 [2024-11-20 19:03:47.051680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:24.747 [2024-11-20 19:03:47.051910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.747 [2024-11-20 19:03:47.051931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.747 [2024-11-20 19:03:47.055822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:24.747 [2024-11-20 19:03:47.056031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.747 [2024-11-20 19:03:47.056052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.747 [2024-11-20 19:03:47.060234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:24.747 [2024-11-20 19:03:47.060446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.747 [2024-11-20 19:03:47.060468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:24.747 [2024-11-20 19:03:47.064242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8
00:26:24.747 [2024-11-20 19:03:47.064423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.747 [2024-11-20 19:03:47.064442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:24.747 [2024-11-20 19:03:47.068109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8
00:26:24.747 [2024-11-20 19:03:47.068302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.747 [2024-11-20 19:03:47.068321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... the same three-line pattern — tcp.c:2233:data_crc32_calc_done Data digest error on tqpair=(0x1c044c0), a WRITE command (sqid:1 cid:0 nsid:1, varying lba, len:32), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with cycling sqhd — repeats roughly every 4 ms for dozens more LBAs through 19:03:47.389 ...]
00:26:25.278 [2024-11-20 19:03:47.393294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8
00:26:25.278 [2024-11-20 19:03:47.393437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.278 [2024-11-20 19:03:47.393456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:25.278 [2024-11-20 19:03:47.397111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.278 [2024-11-20 19:03:47.397298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.278 [2024-11-20 19:03:47.397316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.278 [2024-11-20 19:03:47.401006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.278 [2024-11-20 19:03:47.401151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.278 [2024-11-20 19:03:47.401170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.278 [2024-11-20 19:03:47.404920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.278 [2024-11-20 19:03:47.405059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.278 [2024-11-20 19:03:47.405077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.278 [2024-11-20 19:03:47.409071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.278 [2024-11-20 19:03:47.409260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.278 [2024-11-20 19:03:47.409278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.278 [2024-11-20 19:03:47.412870] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.278 [2024-11-20 19:03:47.413030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.278 [2024-11-20 19:03:47.413049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.278 [2024-11-20 19:03:47.416815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.278 [2024-11-20 19:03:47.416980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.279 [2024-11-20 19:03:47.416998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.279 [2024-11-20 19:03:47.420794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.279 [2024-11-20 19:03:47.420952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.279 [2024-11-20 19:03:47.420969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.279 [2024-11-20 19:03:47.424656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.279 [2024-11-20 19:03:47.424816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.279 [2024-11-20 19:03:47.424836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:26:25.279 [2024-11-20 19:03:47.428693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.279 [2024-11-20 19:03:47.428839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.279 [2024-11-20 19:03:47.428867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.279 [2024-11-20 19:03:47.433551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.279 [2024-11-20 19:03:47.433700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.279 [2024-11-20 19:03:47.433719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.279 [2024-11-20 19:03:47.438089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.279 [2024-11-20 19:03:47.438253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.279 [2024-11-20 19:03:47.438272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.279 [2024-11-20 19:03:47.442091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.279 [2024-11-20 19:03:47.442281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.279 [2024-11-20 19:03:47.442300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.279 [2024-11-20 19:03:47.446163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.279 [2024-11-20 19:03:47.446330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.279 [2024-11-20 19:03:47.446349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.279 [2024-11-20 19:03:47.450228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.279 [2024-11-20 19:03:47.450384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.279 [2024-11-20 19:03:47.450402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.279 [2024-11-20 19:03:47.454068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.279 [2024-11-20 19:03:47.454234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.279 [2024-11-20 19:03:47.454252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.279 [2024-11-20 19:03:47.458020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.279 [2024-11-20 19:03:47.458179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.279 [2024-11-20 19:03:47.458197] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.279 [2024-11-20 19:03:47.462019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.279 [2024-11-20 19:03:47.462176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.279 [2024-11-20 19:03:47.462195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.279 [2024-11-20 19:03:47.465999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.279 [2024-11-20 19:03:47.466156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.279 [2024-11-20 19:03:47.466174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.279 [2024-11-20 19:03:47.469881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.279 [2024-11-20 19:03:47.470042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.279 [2024-11-20 19:03:47.470060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.279 [2024-11-20 19:03:47.473837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.279 [2024-11-20 19:03:47.473990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.279 
[2024-11-20 19:03:47.474013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.279 [2024-11-20 19:03:47.477998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.279 [2024-11-20 19:03:47.478171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.279 [2024-11-20 19:03:47.478190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.279 [2024-11-20 19:03:47.482388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.279 [2024-11-20 19:03:47.482512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.279 [2024-11-20 19:03:47.482531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.279 [2024-11-20 19:03:47.487104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.279 [2024-11-20 19:03:47.487277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.279 [2024-11-20 19:03:47.487295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.279 [2024-11-20 19:03:47.491125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.279 [2024-11-20 19:03:47.491285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.279 [2024-11-20 19:03:47.491304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.279 [2024-11-20 19:03:47.495104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.279 [2024-11-20 19:03:47.495267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.279 [2024-11-20 19:03:47.495285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.279 [2024-11-20 19:03:47.499065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.279 [2024-11-20 19:03:47.499227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.279 [2024-11-20 19:03:47.499245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.279 [2024-11-20 19:03:47.502891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.279 [2024-11-20 19:03:47.503051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.279 [2024-11-20 19:03:47.503070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.279 [2024-11-20 19:03:47.506829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.279 [2024-11-20 19:03:47.506990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.279 [2024-11-20 19:03:47.507007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.279 [2024-11-20 19:03:47.510948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.279 [2024-11-20 19:03:47.511120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.279 [2024-11-20 19:03:47.511138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.279 [2024-11-20 19:03:47.515881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.279 [2024-11-20 19:03:47.516031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.279 [2024-11-20 19:03:47.516050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.279 [2024-11-20 19:03:47.520188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.279 [2024-11-20 19:03:47.520353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.279 [2024-11-20 19:03:47.520372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.279 [2024-11-20 19:03:47.524151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.279 [2024-11-20 19:03:47.524323] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.279 [2024-11-20 19:03:47.524341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.279 [2024-11-20 19:03:47.528083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.279 [2024-11-20 19:03:47.528240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.279 [2024-11-20 19:03:47.528259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.279 [2024-11-20 19:03:47.531969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.279 [2024-11-20 19:03:47.532122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.279 [2024-11-20 19:03:47.532140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.279 [2024-11-20 19:03:47.536055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.279 [2024-11-20 19:03:47.536228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.279 [2024-11-20 19:03:47.536246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.279 [2024-11-20 19:03:47.539892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with 
pdu=0x200016eff3c8 00:26:25.279 [2024-11-20 19:03:47.540055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.279 [2024-11-20 19:03:47.540074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.279 [2024-11-20 19:03:47.543669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.279 [2024-11-20 19:03:47.543825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.279 [2024-11-20 19:03:47.543843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.279 [2024-11-20 19:03:47.547618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.279 [2024-11-20 19:03:47.547774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.279 [2024-11-20 19:03:47.547794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.279 [2024-11-20 19:03:47.552060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.279 [2024-11-20 19:03:47.552228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.279 [2024-11-20 19:03:47.552247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.279 [2024-11-20 19:03:47.556861] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.279 [2024-11-20 19:03:47.557037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.279 [2024-11-20 19:03:47.557055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.279 [2024-11-20 19:03:47.560870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.279 [2024-11-20 19:03:47.561017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.279 [2024-11-20 19:03:47.561035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.280 [2024-11-20 19:03:47.565341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.280 [2024-11-20 19:03:47.565557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.280 [2024-11-20 19:03:47.565577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.280 [2024-11-20 19:03:47.570733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.280 [2024-11-20 19:03:47.571140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.280 [2024-11-20 19:03:47.571160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.280 [2024-11-20 
19:03:47.576375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.280 [2024-11-20 19:03:47.576510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.280 [2024-11-20 19:03:47.576529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.280 [2024-11-20 19:03:47.582437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.280 [2024-11-20 19:03:47.582668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.280 [2024-11-20 19:03:47.582688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.280 [2024-11-20 19:03:47.588909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.280 [2024-11-20 19:03:47.589087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.280 [2024-11-20 19:03:47.589110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.280 [2024-11-20 19:03:47.595244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.280 [2024-11-20 19:03:47.595541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.280 [2024-11-20 19:03:47.595562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:26:25.539 [2024-11-20 19:03:47.601541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.539 [2024-11-20 19:03:47.601824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.539 [2024-11-20 19:03:47.601845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.539 [2024-11-20 19:03:47.608331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.539 [2024-11-20 19:03:47.608548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.539 [2024-11-20 19:03:47.608569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.539 [2024-11-20 19:03:47.614773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.539 [2024-11-20 19:03:47.615031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.539 [2024-11-20 19:03:47.615052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.539 [2024-11-20 19:03:47.621329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.539 [2024-11-20 19:03:47.621497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.539 [2024-11-20 19:03:47.621516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.539 [2024-11-20 19:03:47.627804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.539 [2024-11-20 19:03:47.628068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.540 [2024-11-20 19:03:47.628089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.540 [2024-11-20 19:03:47.634238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.540 [2024-11-20 19:03:47.634532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.540 [2024-11-20 19:03:47.634553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.540 [2024-11-20 19:03:47.641051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.540 [2024-11-20 19:03:47.641349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.540 [2024-11-20 19:03:47.641371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.540 [2024-11-20 19:03:47.647531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.540 [2024-11-20 19:03:47.647708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.540 [2024-11-20 19:03:47.647729] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.540 [2024-11-20 19:03:47.654571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.540 [2024-11-20 19:03:47.654722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.540 [2024-11-20 19:03:47.654741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.540 [2024-11-20 19:03:47.661549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.540 [2024-11-20 19:03:47.661676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.540 [2024-11-20 19:03:47.661694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.540 [2024-11-20 19:03:47.668378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.540 [2024-11-20 19:03:47.668539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.540 [2024-11-20 19:03:47.668558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.540 [2024-11-20 19:03:47.675155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.540 [2024-11-20 19:03:47.675326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:25.540 [2024-11-20 19:03:47.675344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.540 [2024-11-20 19:03:47.681842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.540 [2024-11-20 19:03:47.681969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.540 [2024-11-20 19:03:47.681987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.540 [2024-11-20 19:03:47.688029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.540 [2024-11-20 19:03:47.688099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.540 [2024-11-20 19:03:47.688117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.540 [2024-11-20 19:03:47.692349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.540 [2024-11-20 19:03:47.692420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.540 [2024-11-20 19:03:47.692438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.540 [2024-11-20 19:03:47.696353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.540 [2024-11-20 19:03:47.696422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.540 [2024-11-20 19:03:47.696441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.540 [2024-11-20 19:03:47.700324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.540 [2024-11-20 19:03:47.700405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.540 [2024-11-20 19:03:47.700424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.540 [2024-11-20 19:03:47.704389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.540 [2024-11-20 19:03:47.704460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.540 [2024-11-20 19:03:47.704479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.540 [2024-11-20 19:03:47.708505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.540 [2024-11-20 19:03:47.708574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.540 [2024-11-20 19:03:47.708592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.540 [2024-11-20 19:03:47.712690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.540 [2024-11-20 19:03:47.712764] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.540 [2024-11-20 19:03:47.712783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.540 [2024-11-20 19:03:47.716702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.540 [2024-11-20 19:03:47.716771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.540 [2024-11-20 19:03:47.716790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.540 [2024-11-20 19:03:47.721102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.540 [2024-11-20 19:03:47.721172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.540 [2024-11-20 19:03:47.721191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.540 [2024-11-20 19:03:47.725028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.540 [2024-11-20 19:03:47.725098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.540 [2024-11-20 19:03:47.725117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.540 [2024-11-20 19:03:47.728876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 
00:26:25.540 [2024-11-20 19:03:47.728944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.540 [2024-11-20 19:03:47.728963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.540 [2024-11-20 19:03:47.732735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.540 [2024-11-20 19:03:47.732805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.540 [2024-11-20 19:03:47.732827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.540 [2024-11-20 19:03:47.736596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.540 [2024-11-20 19:03:47.736667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.540 [2024-11-20 19:03:47.736686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.540 [2024-11-20 19:03:47.740415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.540 [2024-11-20 19:03:47.740497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.540 [2024-11-20 19:03:47.740516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.540 [2024-11-20 19:03:47.744266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.540 [2024-11-20 19:03:47.744341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.540 [2024-11-20 19:03:47.744360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.540 [2024-11-20 19:03:47.749340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.540 [2024-11-20 19:03:47.749415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.540 [2024-11-20 19:03:47.749433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.540 [2024-11-20 19:03:47.754170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.540 [2024-11-20 19:03:47.754243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.540 [2024-11-20 19:03:47.754262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.540 [2024-11-20 19:03:47.758200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.540 [2024-11-20 19:03:47.758278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.540 [2024-11-20 19:03:47.758297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.540 [2024-11-20 19:03:47.762141] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.540 [2024-11-20 19:03:47.762220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.541 [2024-11-20 19:03:47.762238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.541 [2024-11-20 19:03:47.765955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.541 [2024-11-20 19:03:47.766022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.541 [2024-11-20 19:03:47.766039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.541 [2024-11-20 19:03:47.770448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.541 [2024-11-20 19:03:47.770570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.541 [2024-11-20 19:03:47.770587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.541 [2024-11-20 19:03:47.774527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.541 [2024-11-20 19:03:47.774596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.541 [2024-11-20 19:03:47.774615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:26:25.541 [2024-11-20 19:03:47.779091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.541 [2024-11-20 19:03:47.779196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.541 [2024-11-20 19:03:47.779223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.541 [2024-11-20 19:03:47.784805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.541 [2024-11-20 19:03:47.784996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.541 [2024-11-20 19:03:47.785015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.541 [2024-11-20 19:03:47.790806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.541 [2024-11-20 19:03:47.790959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.541 [2024-11-20 19:03:47.790978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.541 [2024-11-20 19:03:47.797238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.541 [2024-11-20 19:03:47.797402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.541 [2024-11-20 19:03:47.797420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.541 [2024-11-20 19:03:47.804211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.541 [2024-11-20 19:03:47.804396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.541 [2024-11-20 19:03:47.804415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.541 [2024-11-20 19:03:47.810731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.541 [2024-11-20 19:03:47.810894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.541 [2024-11-20 19:03:47.810912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.541 [2024-11-20 19:03:47.817032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.541 [2024-11-20 19:03:47.817122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.541 [2024-11-20 19:03:47.817140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.541 [2024-11-20 19:03:47.823897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.541 [2024-11-20 19:03:47.824064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.541 [2024-11-20 19:03:47.824083] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.541 [2024-11-20 19:03:47.829943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.541 [2024-11-20 19:03:47.830083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.541 [2024-11-20 19:03:47.830101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.541 [2024-11-20 19:03:47.836579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.541 [2024-11-20 19:03:47.836679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.541 [2024-11-20 19:03:47.836698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.541 [2024-11-20 19:03:47.842319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.541 [2024-11-20 19:03:47.842526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.541 [2024-11-20 19:03:47.842545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.541 [2024-11-20 19:03:47.848308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.541 [2024-11-20 19:03:47.848418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.541 [2024-11-20 19:03:47.848436] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.541 [2024-11-20 19:03:47.854349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.541 [2024-11-20 19:03:47.854503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.541 [2024-11-20 19:03:47.854521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.541 [2024-11-20 19:03:47.860342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.541 [2024-11-20 19:03:47.860489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.541 [2024-11-20 19:03:47.860508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.801 [2024-11-20 19:03:47.866780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.801 [2024-11-20 19:03:47.866960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.801 [2024-11-20 19:03:47.866978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.801 [2024-11-20 19:03:47.872766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.801 [2024-11-20 19:03:47.872906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:25.801 [2024-11-20 19:03:47.872928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.801 [2024-11-20 19:03:47.877064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.801 [2024-11-20 19:03:47.877131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.801 [2024-11-20 19:03:47.877150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.801 [2024-11-20 19:03:47.881067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.801 [2024-11-20 19:03:47.881134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.801 [2024-11-20 19:03:47.881153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.801 [2024-11-20 19:03:47.885184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.801 [2024-11-20 19:03:47.885266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.801 [2024-11-20 19:03:47.885285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.801 [2024-11-20 19:03:47.889095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.801 [2024-11-20 19:03:47.889162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.801 [2024-11-20 19:03:47.889180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.801 [2024-11-20 19:03:47.893038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.801 [2024-11-20 19:03:47.893104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.801 [2024-11-20 19:03:47.893123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.801 [2024-11-20 19:03:47.897054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.801 [2024-11-20 19:03:47.897133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.801 [2024-11-20 19:03:47.897151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.801 [2024-11-20 19:03:47.901638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.801 [2024-11-20 19:03:47.901706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.801 [2024-11-20 19:03:47.901724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.801 [2024-11-20 19:03:47.906738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.801 [2024-11-20 19:03:47.906805] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.801 [2024-11-20 19:03:47.906823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.801 [2024-11-20 19:03:47.910736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.801 [2024-11-20 19:03:47.910821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.801 [2024-11-20 19:03:47.910839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.801 [2024-11-20 19:03:47.914685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.801 [2024-11-20 19:03:47.914756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.801 [2024-11-20 19:03:47.914774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.802 [2024-11-20 19:03:47.918490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.802 [2024-11-20 19:03:47.918560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.802 [2024-11-20 19:03:47.918578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.802 [2024-11-20 19:03:47.922324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 
00:26:25.802 [2024-11-20 19:03:47.922397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.802 [2024-11-20 19:03:47.922416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.802 [2024-11-20 19:03:47.926449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.802 [2024-11-20 19:03:47.926515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.802 [2024-11-20 19:03:47.926533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.802 [2024-11-20 19:03:47.931110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.802 [2024-11-20 19:03:47.931179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.802 [2024-11-20 19:03:47.931196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.802 [2024-11-20 19:03:47.935509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.802 [2024-11-20 19:03:47.935586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.802 [2024-11-20 19:03:47.935604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.802 [2024-11-20 19:03:47.941064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.802 [2024-11-20 19:03:47.941165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.802 [2024-11-20 19:03:47.941183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.802 [2024-11-20 19:03:47.947971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.802 [2024-11-20 19:03:47.948157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.802 [2024-11-20 19:03:47.948175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.802 [2024-11-20 19:03:47.953951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.802 [2024-11-20 19:03:47.955094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.802 [2024-11-20 19:03:47.955115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.802 6757.00 IOPS, 844.62 MiB/s [2024-11-20T18:03:48.127Z] [2024-11-20 19:03:47.960785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:25.802 [2024-11-20 19:03:47.960927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.802 [2024-11-20 19:03:47.960962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0
00:26:25.802 [2024-11-20 19:03:47.967654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8
00:26:25.802 [2024-11-20 19:03:47.967803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.802 [2024-11-20 19:03:47.967823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-line sequence (tcp.c:2233 data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8; WRITE sqid:1 cid:0 nsid:1 len:32 command print with varying lba; COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd cycling through 0002/0022/0042/0062) repeats for every subsequent write from 19:03:47.974 through 19:03:48.361 ...]
00:26:26.066 [2024-11-20 19:03:48.367683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8
00:26:26.066 [2024-11-20 19:03:48.367785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.066 [2024-11-20 19:03:48.367804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.066 [2024-11-20 19:03:48.374263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.066 [2024-11-20 19:03:48.374446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.066 [2024-11-20 19:03:48.374466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.066 [2024-11-20 19:03:48.381240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.066 [2024-11-20 19:03:48.381406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.066 [2024-11-20 19:03:48.381424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.066 [2024-11-20 19:03:48.387739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.066 [2024-11-20 19:03:48.387874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.066 [2024-11-20 19:03:48.387898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.327 [2024-11-20 19:03:48.394947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.327 [2024-11-20 19:03:48.395060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.327 [2024-11-20 19:03:48.395079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.327 [2024-11-20 19:03:48.401392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.327 [2024-11-20 19:03:48.401583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.327 [2024-11-20 19:03:48.401601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.327 [2024-11-20 19:03:48.408226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.327 [2024-11-20 19:03:48.408320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.327 [2024-11-20 19:03:48.408339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.327 [2024-11-20 19:03:48.414949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.327 [2024-11-20 19:03:48.415135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.327 [2024-11-20 19:03:48.415153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.327 [2024-11-20 19:03:48.421617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.327 [2024-11-20 19:03:48.421770] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.327 [2024-11-20 19:03:48.421790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.327 [2024-11-20 19:03:48.428722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.327 [2024-11-20 19:03:48.428907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.327 [2024-11-20 19:03:48.428926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.327 [2024-11-20 19:03:48.435057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.327 [2024-11-20 19:03:48.435163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.327 [2024-11-20 19:03:48.435181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.327 [2024-11-20 19:03:48.441030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.327 [2024-11-20 19:03:48.441362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.327 [2024-11-20 19:03:48.441384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.327 [2024-11-20 19:03:48.446973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 
00:26:26.327 [2024-11-20 19:03:48.447147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.327 [2024-11-20 19:03:48.447166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.327 [2024-11-20 19:03:48.453223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.327 [2024-11-20 19:03:48.453396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.328 [2024-11-20 19:03:48.453415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.328 [2024-11-20 19:03:48.458861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.328 [2024-11-20 19:03:48.459065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.328 [2024-11-20 19:03:48.459085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.328 [2024-11-20 19:03:48.463270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.328 [2024-11-20 19:03:48.463323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.328 [2024-11-20 19:03:48.463341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.328 [2024-11-20 19:03:48.467318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.328 [2024-11-20 19:03:48.467408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.328 [2024-11-20 19:03:48.467426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.328 [2024-11-20 19:03:48.471218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.328 [2024-11-20 19:03:48.471281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.328 [2024-11-20 19:03:48.471299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.328 [2024-11-20 19:03:48.475280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.328 [2024-11-20 19:03:48.475336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.328 [2024-11-20 19:03:48.475355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.328 [2024-11-20 19:03:48.479957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.328 [2024-11-20 19:03:48.480009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.328 [2024-11-20 19:03:48.480027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.328 [2024-11-20 19:03:48.484496] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.328 [2024-11-20 19:03:48.484616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.328 [2024-11-20 19:03:48.484635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.328 [2024-11-20 19:03:48.488568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.328 [2024-11-20 19:03:48.488632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.328 [2024-11-20 19:03:48.488650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.328 [2024-11-20 19:03:48.492843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.328 [2024-11-20 19:03:48.492912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.328 [2024-11-20 19:03:48.492930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.328 [2024-11-20 19:03:48.496827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.328 [2024-11-20 19:03:48.496903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.328 [2024-11-20 19:03:48.496921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:26:26.328 [2024-11-20 19:03:48.500837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.328 [2024-11-20 19:03:48.500888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.328 [2024-11-20 19:03:48.500906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.328 [2024-11-20 19:03:48.504736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.328 [2024-11-20 19:03:48.504811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.328 [2024-11-20 19:03:48.504830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.328 [2024-11-20 19:03:48.508661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.328 [2024-11-20 19:03:48.508717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.328 [2024-11-20 19:03:48.508735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.328 [2024-11-20 19:03:48.512444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.328 [2024-11-20 19:03:48.512517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.328 [2024-11-20 19:03:48.512535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.328 [2024-11-20 19:03:48.516638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.328 [2024-11-20 19:03:48.516693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.328 [2024-11-20 19:03:48.516710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.328 [2024-11-20 19:03:48.520455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.328 [2024-11-20 19:03:48.520548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.328 [2024-11-20 19:03:48.520570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.328 [2024-11-20 19:03:48.524888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.328 [2024-11-20 19:03:48.525057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.328 [2024-11-20 19:03:48.525075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.328 [2024-11-20 19:03:48.530057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.328 [2024-11-20 19:03:48.530261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.328 [2024-11-20 19:03:48.530280] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.328 [2024-11-20 19:03:48.535488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.328 [2024-11-20 19:03:48.535645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.328 [2024-11-20 19:03:48.535664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.328 [2024-11-20 19:03:48.540681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.328 [2024-11-20 19:03:48.540853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.328 [2024-11-20 19:03:48.540872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.328 [2024-11-20 19:03:48.545892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.328 [2024-11-20 19:03:48.546065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.328 [2024-11-20 19:03:48.546084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.328 [2024-11-20 19:03:48.551048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.328 [2024-11-20 19:03:48.551144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:26.328 [2024-11-20 19:03:48.551163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.328 [2024-11-20 19:03:48.556180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.328 [2024-11-20 19:03:48.556349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.328 [2024-11-20 19:03:48.556367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.328 [2024-11-20 19:03:48.561276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.328 [2024-11-20 19:03:48.561430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.328 [2024-11-20 19:03:48.561448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.328 [2024-11-20 19:03:48.566357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.328 [2024-11-20 19:03:48.566493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.328 [2024-11-20 19:03:48.566512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.328 [2024-11-20 19:03:48.571790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.328 [2024-11-20 19:03:48.572008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.328 [2024-11-20 19:03:48.572028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.328 [2024-11-20 19:03:48.576942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.328 [2024-11-20 19:03:48.577087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.329 [2024-11-20 19:03:48.577105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.329 [2024-11-20 19:03:48.582556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.329 [2024-11-20 19:03:48.582730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.329 [2024-11-20 19:03:48.582748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.329 [2024-11-20 19:03:48.587637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.329 [2024-11-20 19:03:48.587813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.329 [2024-11-20 19:03:48.587831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.329 [2024-11-20 19:03:48.592849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.329 [2024-11-20 19:03:48.593015] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.329 [2024-11-20 19:03:48.593033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.329 [2024-11-20 19:03:48.598092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.329 [2024-11-20 19:03:48.598238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.329 [2024-11-20 19:03:48.598257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.329 [2024-11-20 19:03:48.603266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.329 [2024-11-20 19:03:48.603429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.329 [2024-11-20 19:03:48.603449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.329 [2024-11-20 19:03:48.608700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.329 [2024-11-20 19:03:48.608876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.329 [2024-11-20 19:03:48.608897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.329 [2024-11-20 19:03:48.613745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 
00:26:26.329 [2024-11-20 19:03:48.613829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.329 [2024-11-20 19:03:48.613848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.329 [2024-11-20 19:03:48.617972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.329 [2024-11-20 19:03:48.618165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.329 [2024-11-20 19:03:48.618183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.329 [2024-11-20 19:03:48.622966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.329 [2024-11-20 19:03:48.623117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.329 [2024-11-20 19:03:48.623136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.329 [2024-11-20 19:03:48.628164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.329 [2024-11-20 19:03:48.628321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.329 [2024-11-20 19:03:48.628340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.329 [2024-11-20 19:03:48.633987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.329 [2024-11-20 19:03:48.634123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.329 [2024-11-20 19:03:48.634142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.329 [2024-11-20 19:03:48.638694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.329 [2024-11-20 19:03:48.638746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.329 [2024-11-20 19:03:48.638765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.329 [2024-11-20 19:03:48.642725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.329 [2024-11-20 19:03:48.642841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.329 [2024-11-20 19:03:48.642859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.329 [2024-11-20 19:03:48.646864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.329 [2024-11-20 19:03:48.646951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.329 [2024-11-20 19:03:48.646971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.329 [2024-11-20 19:03:48.651019] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.329 [2024-11-20 19:03:48.651113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.329 [2024-11-20 19:03:48.651135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.590 [2024-11-20 19:03:48.655678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.590 [2024-11-20 19:03:48.655815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.590 [2024-11-20 19:03:48.655834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.590 [2024-11-20 19:03:48.661347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.590 [2024-11-20 19:03:48.661498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.590 [2024-11-20 19:03:48.661517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.590 [2024-11-20 19:03:48.667401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.590 [2024-11-20 19:03:48.667548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.590 [2024-11-20 19:03:48.667568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:26:26.590 [2024-11-20 19:03:48.673293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.590 [2024-11-20 19:03:48.673435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.590 [2024-11-20 19:03:48.673454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.590 [2024-11-20 19:03:48.679720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.590 [2024-11-20 19:03:48.679804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.590 [2024-11-20 19:03:48.679823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.590 [2024-11-20 19:03:48.686184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.590 [2024-11-20 19:03:48.686382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.590 [2024-11-20 19:03:48.686401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.590 [2024-11-20 19:03:48.692832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.590 [2024-11-20 19:03:48.692886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.590 [2024-11-20 19:03:48.692904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.590 [2024-11-20 19:03:48.698468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.590 [2024-11-20 19:03:48.698530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.590 [2024-11-20 19:03:48.698548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.590 [2024-11-20 19:03:48.704266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.590 [2024-11-20 19:03:48.704361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.590 [2024-11-20 19:03:48.704379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.590 [2024-11-20 19:03:48.710316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.590 [2024-11-20 19:03:48.710426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.590 [2024-11-20 19:03:48.710445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.590 [2024-11-20 19:03:48.715334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.590 [2024-11-20 19:03:48.715391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.590 [2024-11-20 19:03:48.715410] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.590 [2024-11-20 19:03:48.719790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.590 [2024-11-20 19:03:48.719849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.590 [2024-11-20 19:03:48.719868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.590 [2024-11-20 19:03:48.724793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.590 [2024-11-20 19:03:48.724873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.590 [2024-11-20 19:03:48.724892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.590 [2024-11-20 19:03:48.730040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.590 [2024-11-20 19:03:48.730092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.590 [2024-11-20 19:03:48.730110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.590 [2024-11-20 19:03:48.734708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.590 [2024-11-20 19:03:48.734761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:26.590 [2024-11-20 19:03:48.734779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.590 [2024-11-20 19:03:48.739542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.590 [2024-11-20 19:03:48.739604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.590 [2024-11-20 19:03:48.739622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.590 [2024-11-20 19:03:48.743974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.590 [2024-11-20 19:03:48.744035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.590 [2024-11-20 19:03:48.744054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.590 [2024-11-20 19:03:48.748945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.590 [2024-11-20 19:03:48.748997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.590 [2024-11-20 19:03:48.749014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.590 [2024-11-20 19:03:48.753357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.590 [2024-11-20 19:03:48.753424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.590 [2024-11-20 19:03:48.753441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.590 [2024-11-20 19:03:48.757746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.590 [2024-11-20 19:03:48.757810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.590 [2024-11-20 19:03:48.757828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.590 [2024-11-20 19:03:48.762788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.590 [2024-11-20 19:03:48.762855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.590 [2024-11-20 19:03:48.762873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.590 [2024-11-20 19:03:48.767586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.590 [2024-11-20 19:03:48.767653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.590 [2024-11-20 19:03:48.767671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.590 [2024-11-20 19:03:48.772252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.590 [2024-11-20 19:03:48.772312] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.590 [2024-11-20 19:03:48.772330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.590 [2024-11-20 19:03:48.776422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.590 [2024-11-20 19:03:48.776485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.590 [2024-11-20 19:03:48.776503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.590 [2024-11-20 19:03:48.780653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.590 [2024-11-20 19:03:48.780744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.591 [2024-11-20 19:03:48.780763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.591 [2024-11-20 19:03:48.784841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.591 [2024-11-20 19:03:48.784908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.591 [2024-11-20 19:03:48.784930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.591 [2024-11-20 19:03:48.789268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.591 [2024-11-20 19:03:48.789329] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.591 [2024-11-20 19:03:48.789347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.591 [2024-11-20 19:03:48.793323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.591 [2024-11-20 19:03:48.793377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.591 [2024-11-20 19:03:48.793395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.591 [2024-11-20 19:03:48.797511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.591 [2024-11-20 19:03:48.797562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.591 [2024-11-20 19:03:48.797579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.591 [2024-11-20 19:03:48.802386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.591 [2024-11-20 19:03:48.802447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.591 [2024-11-20 19:03:48.802466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.591 [2024-11-20 19:03:48.806563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with 
pdu=0x200016eff3c8 00:26:26.591 [2024-11-20 19:03:48.806652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.591 [2024-11-20 19:03:48.806671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.591 [2024-11-20 19:03:48.810751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.591 [2024-11-20 19:03:48.810812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.591 [2024-11-20 19:03:48.810829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.591 [2024-11-20 19:03:48.814635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.591 [2024-11-20 19:03:48.814697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.591 [2024-11-20 19:03:48.814716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.591 [2024-11-20 19:03:48.818599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.591 [2024-11-20 19:03:48.818651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.591 [2024-11-20 19:03:48.818670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.591 [2024-11-20 19:03:48.822646] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.591 [2024-11-20 19:03:48.822737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.591 [2024-11-20 19:03:48.822757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.591 [2024-11-20 19:03:48.826653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.591 [2024-11-20 19:03:48.826707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.591 [2024-11-20 19:03:48.826725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.591 [2024-11-20 19:03:48.830623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.591 [2024-11-20 19:03:48.830684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.591 [2024-11-20 19:03:48.830702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.591 [2024-11-20 19:03:48.834601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.591 [2024-11-20 19:03:48.834654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.591 [2024-11-20 19:03:48.834672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.591 [2024-11-20 
19:03:48.838626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.591 [2024-11-20 19:03:48.838677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.591 [2024-11-20 19:03:48.838695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.591 [2024-11-20 19:03:48.842669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.591 [2024-11-20 19:03:48.842732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.591 [2024-11-20 19:03:48.842750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.591 [2024-11-20 19:03:48.846995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.591 [2024-11-20 19:03:48.847046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.591 [2024-11-20 19:03:48.847065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.591 [2024-11-20 19:03:48.850945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.591 [2024-11-20 19:03:48.851006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.591 [2024-11-20 19:03:48.851025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:26:26.591 [2024-11-20 19:03:48.854742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.591 [2024-11-20 19:03:48.854804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.591 [2024-11-20 19:03:48.854823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.591 [2024-11-20 19:03:48.858625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.591 [2024-11-20 19:03:48.858676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.591 [2024-11-20 19:03:48.858695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.591 [2024-11-20 19:03:48.862846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.591 [2024-11-20 19:03:48.862895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.591 [2024-11-20 19:03:48.862913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.591 [2024-11-20 19:03:48.867320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.591 [2024-11-20 19:03:48.867372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.591 [2024-11-20 19:03:48.867390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.591 [2024-11-20 19:03:48.871596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.591 [2024-11-20 19:03:48.871662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.591 [2024-11-20 19:03:48.871680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.591 [2024-11-20 19:03:48.875583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.591 [2024-11-20 19:03:48.875651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.591 [2024-11-20 19:03:48.875669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.591 [2024-11-20 19:03:48.879468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.591 [2024-11-20 19:03:48.879536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.591 [2024-11-20 19:03:48.879554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.591 [2024-11-20 19:03:48.883309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.591 [2024-11-20 19:03:48.883373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.591 [2024-11-20 19:03:48.883391] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.591 [2024-11-20 19:03:48.887432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.591 [2024-11-20 19:03:48.887501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.591 [2024-11-20 19:03:48.887520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.591 [2024-11-20 19:03:48.892125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.591 [2024-11-20 19:03:48.892190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.592 [2024-11-20 19:03:48.892217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.592 [2024-11-20 19:03:48.896703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.592 [2024-11-20 19:03:48.896786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.592 [2024-11-20 19:03:48.896804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.592 [2024-11-20 19:03:48.900938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.592 [2024-11-20 19:03:48.901005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:26.592 [2024-11-20 19:03:48.901023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.592 [2024-11-20 19:03:48.905476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.592 [2024-11-20 19:03:48.905544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.592 [2024-11-20 19:03:48.905562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.592 [2024-11-20 19:03:48.910659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.592 [2024-11-20 19:03:48.910731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.592 [2024-11-20 19:03:48.910750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.851 [2024-11-20 19:03:48.914739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.851 [2024-11-20 19:03:48.914807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.851 [2024-11-20 19:03:48.914825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.851 [2024-11-20 19:03:48.918761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.851 [2024-11-20 19:03:48.918825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.851 [2024-11-20 19:03:48.918843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.851 [2024-11-20 19:03:48.922817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.851 [2024-11-20 19:03:48.922883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.851 [2024-11-20 19:03:48.922901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.851 [2024-11-20 19:03:48.926888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.851 [2024-11-20 19:03:48.926956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.851 [2024-11-20 19:03:48.926974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.851 [2024-11-20 19:03:48.930997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.851 [2024-11-20 19:03:48.931092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.851 [2024-11-20 19:03:48.931111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.851 [2024-11-20 19:03:48.935506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.851 [2024-11-20 19:03:48.935609] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.851 [2024-11-20 19:03:48.935627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.851 [2024-11-20 19:03:48.939582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.851 [2024-11-20 19:03:48.939655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.851 [2024-11-20 19:03:48.939673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.851 [2024-11-20 19:03:48.943603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.851 [2024-11-20 19:03:48.943681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.851 [2024-11-20 19:03:48.943699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.851 [2024-11-20 19:03:48.947563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 00:26:26.851 [2024-11-20 19:03:48.947654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.851 [2024-11-20 19:03:48.947672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.852 [2024-11-20 19:03:48.951494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8 
00:26:26.852 [2024-11-20 19:03:48.951576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.852 [2024-11-20 19:03:48.951594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:26.852 [2024-11-20 19:03:48.955533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c044c0) with pdu=0x200016eff3c8
00:26:26.852 [2024-11-20 19:03:48.955600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.852 [2024-11-20 19:03:48.955617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:26.852 6558.50 IOPS, 819.81 MiB/s
00:26:26.852 Latency(us)
00:26:26.852 [2024-11-20T18:03:49.177Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:26.852 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:26:26.852 nvme0n1 : 2.00 6558.02 819.75 0.00 0.00 2435.66 1544.78 7427.41
00:26:26.852 [2024-11-20T18:03:49.177Z] ===================================================================================================================
00:26:26.852 [2024-11-20T18:03:49.177Z] Total : 6558.02 819.75 0.00 0.00 2435.66 1544.78 7427.41
00:26:26.852 {
00:26:26.852   "results": [
00:26:26.852     {
00:26:26.852       "job": "nvme0n1",
00:26:26.852       "core_mask": "0x2",
00:26:26.852       "workload": "randwrite",
00:26:26.852       "status": "finished",
00:26:26.852       "queue_depth": 16,
00:26:26.852       "io_size": 131072,
00:26:26.852       "runtime": 2.003195,
00:26:26.852       "iops": 6558.023557367106,
00:26:26.852       "mibps": 819.7529446708883,
00:26:26.852       "io_failed": 0,
00:26:26.852       "io_timeout": 0,
00:26:26.852       "avg_latency_us": 2435.6641581574395,
00:26:26.852       "min_latency_us": 1544.777142857143,
00:26:26.852       "max_latency_us": 7427.413333333333
00:26:26.852     }
00:26:26.852   ],
00:26:26.852   "core_count": 1
00:26:26.852 }
00:26:26.852 19:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:26.852 19:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:26.852 19:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:26.852 | .driver_specific
00:26:26.852 | .nvme_error
00:26:26.852 | .status_code
00:26:26.852 | .command_transient_transport_error'
00:26:26.852 19:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:27.112 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 424 > 0 ))
00:26:27.112 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3793318
00:26:27.112 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3793318 ']'
00:26:27.112 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3793318
00:26:27.112 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:26:27.112 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:27.112 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3793318
00:26:27.112 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:27.112 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo
']' 00:26:27.112 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3793318' 00:26:27.112 killing process with pid 3793318 00:26:27.112 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3793318 00:26:27.112 Received shutdown signal, test time was about 2.000000 seconds 00:26:27.112 00:26:27.112 Latency(us) 00:26:27.112 [2024-11-20T18:03:49.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:27.112 [2024-11-20T18:03:49.437Z] =================================================================================================================== 00:26:27.112 [2024-11-20T18:03:49.437Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:27.113 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3793318 00:26:27.113 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3791581 00:26:27.113 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3791581 ']' 00:26:27.113 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3791581 00:26:27.113 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:27.113 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:27.113 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3791581 00:26:27.373 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:27.373 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:27.373 19:03:49 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3791581' 00:26:27.373 killing process with pid 3791581 00:26:27.373 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3791581 00:26:27.373 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3791581 00:26:27.373 00:26:27.373 real 0m14.010s 00:26:27.373 user 0m26.579s 00:26:27.374 sys 0m4.680s 00:26:27.374 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:27.374 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:27.374 ************************************ 00:26:27.374 END TEST nvmf_digest_error 00:26:27.374 ************************************ 00:26:27.374 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:27.374 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:27.374 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:27.374 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:26:27.374 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:27.374 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:26:27.374 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:27.374 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:27.374 rmmod nvme_tcp 00:26:27.374 rmmod nvme_fabrics 00:26:27.374 rmmod nvme_keyring 00:26:27.374 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:27.633 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:26:27.633 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@129 -- # return 0 00:26:27.633 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3791581 ']' 00:26:27.633 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3791581 00:26:27.633 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 3791581 ']' 00:26:27.633 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 3791581 00:26:27.633 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3791581) - No such process 00:26:27.633 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 3791581 is not found' 00:26:27.633 Process with pid 3791581 is not found 00:26:27.633 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:27.633 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:27.633 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:27.633 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:26:27.633 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:26:27.633 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:27.633 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:26:27.633 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:27.633 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:27.633 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:27.633 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:27.633 19:03:49 nvmf_tcp.nvmf_host.nvmf_digest -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.537 19:03:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:29.537 00:26:29.537 real 0m36.434s 00:26:29.537 user 0m55.191s 00:26:29.537 sys 0m13.803s 00:26:29.537 19:03:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:29.537 19:03:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:29.537 ************************************ 00:26:29.537 END TEST nvmf_digest 00:26:29.537 ************************************ 00:26:29.537 19:03:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:29.537 19:03:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:29.537 19:03:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:29.537 19:03:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:29.537 19:03:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:29.537 19:03:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:29.537 19:03:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.537 ************************************ 00:26:29.537 START TEST nvmf_bdevperf 00:26:29.537 ************************************ 00:26:29.537 19:03:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:29.797 * Looking for test storage... 
00:26:29.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:29.797 19:03:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:29.797 19:03:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:26:29.797 19:03:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:29.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.797 --rc genhtml_branch_coverage=1 00:26:29.797 --rc genhtml_function_coverage=1 00:26:29.797 --rc genhtml_legend=1 00:26:29.797 --rc geninfo_all_blocks=1 00:26:29.797 --rc geninfo_unexecuted_blocks=1 00:26:29.797 00:26:29.797 ' 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:26:29.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.797 --rc genhtml_branch_coverage=1 00:26:29.797 --rc genhtml_function_coverage=1 00:26:29.797 --rc genhtml_legend=1 00:26:29.797 --rc geninfo_all_blocks=1 00:26:29.797 --rc geninfo_unexecuted_blocks=1 00:26:29.797 00:26:29.797 ' 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:29.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.797 --rc genhtml_branch_coverage=1 00:26:29.797 --rc genhtml_function_coverage=1 00:26:29.797 --rc genhtml_legend=1 00:26:29.797 --rc geninfo_all_blocks=1 00:26:29.797 --rc geninfo_unexecuted_blocks=1 00:26:29.797 00:26:29.797 ' 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:29.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.797 --rc genhtml_branch_coverage=1 00:26:29.797 --rc genhtml_function_coverage=1 00:26:29.797 --rc genhtml_legend=1 00:26:29.797 --rc geninfo_all_blocks=1 00:26:29.797 --rc geninfo_unexecuted_blocks=1 00:26:29.797 00:26:29.797 ' 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.797 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.798 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:26:29.798 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.798 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:26:29.798 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:29.798 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:29.798 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:29.798 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:29.798 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:29.798 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:29.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:29.798 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:29.798 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:29.798 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:29.798 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:29.798 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:29.798 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:29.798 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:29.798 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:29.798 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:29.798 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:29.798 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:29.798 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.798 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:29.798 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.798 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:29.798 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:29.798 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:29.798 19:03:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:36.368 19:03:57 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:36.368 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:36.368 
19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:36.368 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:36.368 Found net devices under 0000:86:00.0: cvl_0_0 00:26:36.368 19:03:57 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.368 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:36.368 Found net devices under 0000:86:00.1: cvl_0_1 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:36.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:36.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:26:36.369 00:26:36.369 --- 10.0.0.2 ping statistics --- 00:26:36.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.369 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:36.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:36.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:26:36.369 00:26:36.369 --- 10.0.0.1 ping statistics --- 00:26:36.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.369 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:36.369 19:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3797323 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3797323 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3797323 ']' 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:36.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
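The `ipts` call at nvmf/common.sh@287, expanded at @790 above, wraps `iptables` so that every rule it installs carries an `SPDK_NVMF:` comment recording the original arguments — that is what lets teardown later find and delete exactly the rules the test added. A minimal sketch of such a wrapper; it echoes the command instead of executing it so it can run without root (an illustration-only simplification):

```shell
# Sketch of the ipts wrapper seen at nvmf/common.sh@790: tag the rule with
# an SPDK_NVMF comment built from the original arguments. Echoed, not
# executed, so no root is needed for this illustration.
ipts() {
    echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

rule=$(ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT)
echo "$rule"
```

In the real wrapper the comment is passed as a single quoted argument to `iptables`, exactly as the expanded command in the log shows.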
00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.369 [2024-11-20 19:03:58.087419] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:26:36.369 [2024-11-20 19:03:58.087466] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:36.369 [2024-11-20 19:03:58.168266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:36.369 [2024-11-20 19:03:58.210071] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:36.369 [2024-11-20 19:03:58.210107] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:36.369 [2024-11-20 19:03:58.210115] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:36.369 [2024-11-20 19:03:58.210121] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:36.369 [2024-11-20 19:03:58.210126] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
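The `waitforlisten 3797323` step above blocks until the freshly launched `nvmf_tgt` is up and listening on `/var/tmp/spdk.sock`. A sketch of that polling pattern; the real helper in common/autotest_common.sh also verifies the PID is still alive and that the path is a listening UNIX socket, while this simplification only tests for existence:

```shell
# Sketch of a waitforlisten-style helper: poll until a path appears,
# giving up after max_retries attempts (0.1 s apart). Simplified from
# the real helper, which also checks the PID and socket state.
waitforlisten_sketch() {
    local path=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$path" ] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $path" >&2
    return 1
}

sock=$(mktemp -u)                 # a path that does not exist yet
( sleep 0.3; touch "$sock" ) &    # simulate the target creating its socket
waitforlisten_sketch "$sock" 50 && echo "listening on $sock"
rm -f "$sock"
```

This is why the log prints "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." before any RPC (`rpc_cmd`) is issued.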
00:26:36.369 [2024-11-20 19:03:58.211470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:36.369 [2024-11-20 19:03:58.211578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.369 [2024-11-20 19:03:58.211579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.369 [2024-11-20 19:03:58.349079] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.369 Malloc0 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.369 [2024-11-20 19:03:58.411889] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:36.369 
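The `gen_nvmf_target_json` trace that follows (nvmf/common.sh@560–586) builds one JSON stanza per subsystem with a heredoc, collects them in `config=()`, and joins them into the config that bdevperf reads from `/dev/fd/62`. A reduced sketch of that pattern; the real helper also fills `hostnqn`, `hdgst`/`ddgst`, and pipes the result through `jq`, all omitted here for brevity:

```shell
# Sketch of the gen_nvmf_target_json pattern: one attach-controller stanza
# per subsystem, joined with commas. Reduced from nvmf/common.sh (no jq,
# fewer params) purely for illustration.
gen_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do   # default to subsystem 1, as upstream does
        config+=("$(printf '{"params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.2","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s"},"method":"bdev_nvme_attach_controller"}' \
            "$subsystem" "$subsystem")")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"     # IFS=, joins the stanzas with commas
}

json=$(gen_target_json_sketch 1)
echo "$json"
```

With one argument this yields the single `Nvme1`/`cnode1` stanza visible in the expanded `printf '%s\n'` output in the log.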
19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:36.369 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:36.369 { 00:26:36.369 "params": { 00:26:36.369 "name": "Nvme$subsystem", 00:26:36.369 "trtype": "$TEST_TRANSPORT", 00:26:36.369 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.369 "adrfam": "ipv4", 00:26:36.369 "trsvcid": "$NVMF_PORT", 00:26:36.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.370 "hdgst": ${hdgst:-false}, 00:26:36.370 "ddgst": ${ddgst:-false} 00:26:36.370 }, 00:26:36.370 "method": "bdev_nvme_attach_controller" 00:26:36.370 } 00:26:36.370 EOF 00:26:36.370 )") 00:26:36.370 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:36.370 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:26:36.370 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:36.370 19:03:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:36.370 "params": { 00:26:36.370 "name": "Nvme1", 00:26:36.370 "trtype": "tcp", 00:26:36.370 "traddr": "10.0.0.2", 00:26:36.370 "adrfam": "ipv4", 00:26:36.370 "trsvcid": "4420", 00:26:36.370 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:36.370 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:36.370 "hdgst": false, 00:26:36.370 "ddgst": false 00:26:36.370 }, 00:26:36.370 "method": "bdev_nvme_attach_controller" 00:26:36.370 }' 00:26:36.370 [2024-11-20 19:03:58.463621] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:26:36.370 [2024-11-20 19:03:58.463667] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3797380 ] 00:26:36.370 [2024-11-20 19:03:58.538482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.370 [2024-11-20 19:03:58.579663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.628 Running I/O for 1 seconds... 00:26:37.563 11334.00 IOPS, 44.27 MiB/s 00:26:37.563 Latency(us) 00:26:37.563 [2024-11-20T18:03:59.888Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:37.563 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:37.563 Verification LBA range: start 0x0 length 0x4000 00:26:37.563 Nvme1n1 : 1.01 11410.51 44.57 0.00 0.00 11174.37 2340.57 13232.03 00:26:37.563 [2024-11-20T18:03:59.888Z] =================================================================================================================== 00:26:37.563 [2024-11-20T18:03:59.888Z] Total : 11410.51 44.57 0.00 0.00 11174.37 2340.57 13232.03 00:26:37.822 19:03:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3797654 00:26:37.822 19:03:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:37.822 19:03:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:37.822 19:03:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:37.822 19:03:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:37.822 19:03:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:37.822 19:03:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:26:37.822 19:03:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:37.822 { 00:26:37.822 "params": { 00:26:37.822 "name": "Nvme$subsystem", 00:26:37.822 "trtype": "$TEST_TRANSPORT", 00:26:37.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:37.822 "adrfam": "ipv4", 00:26:37.822 "trsvcid": "$NVMF_PORT", 00:26:37.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:37.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:37.822 "hdgst": ${hdgst:-false}, 00:26:37.822 "ddgst": ${ddgst:-false} 00:26:37.822 }, 00:26:37.822 "method": "bdev_nvme_attach_controller" 00:26:37.822 } 00:26:37.822 EOF 00:26:37.822 )") 00:26:37.822 19:03:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:37.822 19:03:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:26:37.822 19:03:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:37.823 19:03:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:37.823 "params": { 00:26:37.823 "name": "Nvme1", 00:26:37.823 "trtype": "tcp", 00:26:37.823 "traddr": "10.0.0.2", 00:26:37.823 "adrfam": "ipv4", 00:26:37.823 "trsvcid": "4420", 00:26:37.823 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:37.823 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:37.823 "hdgst": false, 00:26:37.823 "ddgst": false 00:26:37.823 }, 00:26:37.823 "method": "bdev_nvme_attach_controller" 00:26:37.823 }' 00:26:37.823 [2024-11-20 19:03:59.965587] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:26:37.823 [2024-11-20 19:03:59.965641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3797654 ] 00:26:37.823 [2024-11-20 19:04:00.044554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.823 [2024-11-20 19:04:00.090445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.082 Running I/O for 15 seconds... 00:26:40.395 11308.00 IOPS, 44.17 MiB/s [2024-11-20T18:04:02.982Z] 11458.50 IOPS, 44.76 MiB/s [2024-11-20T18:04:02.982Z] 19:04:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3797323 00:26:40.657 19:04:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:40.657 [2024-11-20 19:04:02.932602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:110480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.657 [2024-11-20 19:04:02.932643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.657 [2024-11-20 19:04:02.932662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.657 [2024-11-20 19:04:02.932671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.657 [2024-11-20 19:04:02.932680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:110496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.657 [2024-11-20 19:04:02.932689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.657 [2024-11-20 19:04:02.932698] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:110504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.657 [2024-11-20 19:04:02.932707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.657 [2024-11-20 19:04:02.932716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:110512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.657 [2024-11-20 19:04:02.932724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.657 [2024-11-20 19:04:02.932736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:110520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.657 [2024-11-20 19:04:02.932747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.657 [2024-11-20 19:04:02.932756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.657 [2024-11-20 19:04:02.932763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.657 [2024-11-20 19:04:02.932773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:110536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.657 [2024-11-20 19:04:02.932782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.658 [2024-11-20 19:04:02.932790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:110544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.658 [2024-11-20 19:04:02.932797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:40.658 [2024-11-20 19:04:02.932805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:110552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.658 [2024-11-20 19:04:02.932812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.658 [2024-11-20 19:04:02.932821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.658 [2024-11-20 19:04:02.932829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.658 [2024-11-20 19:04:02.932838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:110568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.658 [2024-11-20 19:04:02.932847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.658 [2024-11-20 19:04:02.932858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:110576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.658 [2024-11-20 19:04:02.932866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.658 [2024-11-20 19:04:02.932875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:110584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.658 [2024-11-20 19:04:02.932883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.658 [2024-11-20 19:04:02.932893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:110592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.658 [2024-11-20 
19:04:02.932902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.658 [2024-11-20 19:04:02.932913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:110600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.658 [2024-11-20 19:04:02.932922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.658 [2024-11-20 19:04:02.932930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:110608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.658 [2024-11-20 19:04:02.932938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.658 [2024-11-20 19:04:02.932948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:110616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.658 [2024-11-20 19:04:02.932957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.658 [2024-11-20 19:04:02.932971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:110624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.658 [2024-11-20 19:04:02.932980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.658 [2024-11-20 19:04:02.932990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:110632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.658 [2024-11-20 19:04:02.932997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.658 [2024-11-20 19:04:02.933006] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:110640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.658 [2024-11-20 19:04:02.933013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.658 [2024-11-20 19:04:02.933022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:110648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.658 [2024-11-20 19:04:02.933029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.658 [2024-11-20 19:04:02.933037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:110656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.658 [2024-11-20 19:04:02.933044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.658 [2024-11-20 19:04:02.933052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:110664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.658 [2024-11-20 19:04:02.933058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.658 [2024-11-20 19:04:02.933066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:110672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.658 [2024-11-20 19:04:02.933073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.658 [2024-11-20 19:04:02.933081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:110680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.658 [2024-11-20 19:04:02.933088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:40.658 [2024-11-20 19:04:02.933095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:110688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.658 [2024-11-20 19:04:02.933102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.658 [2024-11-20 19:04:02.933111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:110696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.658 [2024-11-20 19:04:02.933118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.658 [2024-11-20 19:04:02.933126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:110704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.658 [2024-11-20 19:04:02.933132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.658 [2024-11-20 19:04:02.933140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:110712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.658 [2024-11-20 19:04:02.933146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.658 [2024-11-20 19:04:02.933154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:110720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.658 [2024-11-20 19:04:02.933162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.658 [2024-11-20 19:04:02.933171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:110728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.658 [2024-11-20 
19:04:02.933177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.658 [2024-11-20 19:04:02.933185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.658 [2024-11-20 19:04:02.933191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.658 [2024-11-20 19:04:02.933200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:110744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.658 [2024-11-20 19:04:02.933326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.658 [2024-11-20 19:04:02.933335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.658 [2024-11-20 19:04:02.933342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.658 [2024-11-20 19:04:02.933350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:110760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.658 [2024-11-20 19:04:02.933358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.658 [2024-11-20 19:04:02.933367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:110768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.658 [2024-11-20 19:04:02.933374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.658 [2024-11-20 19:04:02.933382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:93 nsid:1 lba:110776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.658 [2024-11-20 19:04:02.933388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.658 [2024-11-20 19:04:02.933396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:110784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.658 [2024-11-20 19:04:02.933403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.658 [2024-11-20 19:04:02.933410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:110792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.659 [2024-11-20 19:04:02.933417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.659 [2024-11-20 19:04:02.933425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:110800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.659 [2024-11-20 19:04:02.933432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.659 [2024-11-20 19:04:02.933440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:110808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.659 [2024-11-20 19:04:02.933446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.659 [2024-11-20 19:04:02.933454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:110816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.659 [2024-11-20 19:04:02.933461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:40.659 [2024-11-20 19:04:02.933470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:110824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.659 [2024-11-20 19:04:02.933477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.659 [2024-11-20 19:04:02.933485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:110832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.659 [2024-11-20 19:04:02.933492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.659 [2024-11-20 19:04:02.933500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:110840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.659 [2024-11-20 19:04:02.933506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.659 [2024-11-20 19:04:02.933514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.659 [2024-11-20 19:04:02.933520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.659 [2024-11-20 19:04:02.933528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:110856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.659 [2024-11-20 19:04:02.933535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.659 [2024-11-20 19:04:02.933544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:110864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.659 [2024-11-20 19:04:02.933550] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.659 [2024-11-20 19:04:02.933558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.659 [2024-11-20 19:04:02.933565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.659 [2024-11-20 19:04:02.933573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:110880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.659 [2024-11-20 19:04:02.933580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.659 [2024-11-20 19:04:02.933589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:110888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.659 [2024-11-20 19:04:02.933595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.659 [2024-11-20 19:04:02.933603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:110896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.659 [2024-11-20 19:04:02.933610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.659 [2024-11-20 19:04:02.933618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:110904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.659 [2024-11-20 19:04:02.933624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.659 [2024-11-20 19:04:02.933632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 
lba:110912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.659 [2024-11-20 19:04:02.933639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.659 [2024-11-20 19:04:02.933648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:110920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.659 [2024-11-20 19:04:02.933654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.659 [2024-11-20 19:04:02.933664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:110928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.659 [2024-11-20 19:04:02.933670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.659 [2024-11-20 19:04:02.933678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:110936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.659 [2024-11-20 19:04:02.933685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.659 [2024-11-20 19:04:02.933693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.659 [2024-11-20 19:04:02.933700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.659 [2024-11-20 19:04:02.933708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:110952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.659 [2024-11-20 19:04:02.933714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.659 
[2024-11-20 19:04:02.933722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:110960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.659 [2024-11-20 19:04:02.933729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.659 [2024-11-20 19:04:02.933737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:110968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.659 [2024-11-20 19:04:02.933745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.659 [2024-11-20 19:04:02.933754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:110976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.659 [2024-11-20 19:04:02.933760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.659 [2024-11-20 19:04:02.933769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:110984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.659 [2024-11-20 19:04:02.933775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.659 [2024-11-20 19:04:02.933782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.659 [2024-11-20 19:04:02.933788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.659 [2024-11-20 19:04:02.933797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:111000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.659 [2024-11-20 19:04:02.933804] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.659 [2024-11-20 19:04:02.933813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:111008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.659 [2024-11-20 19:04:02.933819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.659 [2024-11-20 19:04:02.933827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:111016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.659 [2024-11-20 19:04:02.933833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.659 [2024-11-20 19:04:02.933841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:111024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.659 [2024-11-20 19:04:02.933849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.659 [2024-11-20 19:04:02.933859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:111032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.659 [2024-11-20 19:04:02.933865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.660 [2024-11-20 19:04:02.933873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:111040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.660 [2024-11-20 19:04:02.933880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.660 [2024-11-20 19:04:02.933888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 
lba:111048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.660 [2024-11-20 19:04:02.933895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.660 [2024-11-20 19:04:02.933903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:111056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.660 [2024-11-20 19:04:02.933910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.660 [2024-11-20 19:04:02.933918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:111064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.660 [2024-11-20 19:04:02.933925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.660 [2024-11-20 19:04:02.933933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:111072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.660 [2024-11-20 19:04:02.933939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.660 [2024-11-20 19:04:02.933947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:111080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.660 [2024-11-20 19:04:02.933953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.660 [2024-11-20 19:04:02.933962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:111088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.660 [2024-11-20 19:04:02.933969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.660 
[2024-11-20 19:04:02.933977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:111096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.660 [2024-11-20 19:04:02.933983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.660 [2024-11-20 19:04:02.933991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:111104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.660 [2024-11-20 19:04:02.933998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.660 [2024-11-20 19:04:02.934005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:111112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.660 [2024-11-20 19:04:02.934013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.660 [2024-11-20 19:04:02.934020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:111120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.660 [2024-11-20 19:04:02.934027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.660 [2024-11-20 19:04:02.934037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:111128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.660 [2024-11-20 19:04:02.934044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.660 [2024-11-20 19:04:02.934052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:111136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.660 [2024-11-20 19:04:02.934058] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.660 [2024-11-20 19:04:02.934066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:111144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.660 [2024-11-20 19:04:02.934073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.660 [2024-11-20 19:04:02.934081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:111152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.660 [2024-11-20 19:04:02.934088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.660 [2024-11-20 19:04:02.934095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:111160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.660 [2024-11-20 19:04:02.934102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.660 [2024-11-20 19:04:02.934111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:111168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.660 [2024-11-20 19:04:02.934119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.660 [2024-11-20 19:04:02.934128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:111176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.660 [2024-11-20 19:04:02.934135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.660 [2024-11-20 19:04:02.934143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 
lba:111184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.660 [2024-11-20 19:04:02.934149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.660 [2024-11-20 19:04:02.934158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:111192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.660 [2024-11-20 19:04:02.934164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.660 [2024-11-20 19:04:02.934173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:111200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.660 [2024-11-20 19:04:02.934180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.660 [2024-11-20 19:04:02.934188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:111208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.660 [2024-11-20 19:04:02.934195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.660 [2024-11-20 19:04:02.934206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:111216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.660 [2024-11-20 19:04:02.934213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.660 [2024-11-20 19:04:02.934221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:111224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.660 [2024-11-20 19:04:02.934230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.660 
[2024-11-20 19:04:02.934238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:111232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.660 [2024-11-20 19:04:02.934245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.660 [2024-11-20 19:04:02.934253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:111240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.660 [2024-11-20 19:04:02.934260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.660 [2024-11-20 19:04:02.934267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:111248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.660 [2024-11-20 19:04:02.934274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.660 [2024-11-20 19:04:02.934282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:111256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.660 [2024-11-20 19:04:02.934290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.660 [2024-11-20 19:04:02.934298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:111264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.660 [2024-11-20 19:04:02.934305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.660 [2024-11-20 19:04:02.934313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:111272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.660 [2024-11-20 19:04:02.934320] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.660 [2024-11-20 19:04:02.934328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:111280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.660 [2024-11-20 19:04:02.934334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.660 [2024-11-20 19:04:02.934343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:111288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.660 [2024-11-20 19:04:02.934349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.660 [2024-11-20 19:04:02.934357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:111296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.660 [2024-11-20 19:04:02.934364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.661 [2024-11-20 19:04:02.934372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:111304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.661 [2024-11-20 19:04:02.934378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.661 [2024-11-20 19:04:02.934386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:111312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.661 [2024-11-20 19:04:02.934393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.661 [2024-11-20 19:04:02.934401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 
lba:111320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.661 [2024-11-20 19:04:02.934408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.661 [2024-11-20 19:04:02.934420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:111328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.661 [2024-11-20 19:04:02.934427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.661 [2024-11-20 19:04:02.934434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:111336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.661 [2024-11-20 19:04:02.934442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.661 [2024-11-20 19:04:02.934450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:111344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.661 [2024-11-20 19:04:02.934457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.661 [2024-11-20 19:04:02.934465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:111352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.661 [2024-11-20 19:04:02.934472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.661 [2024-11-20 19:04:02.934480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:111360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.661 [2024-11-20 19:04:02.934486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.661 
[2024-11-20 19:04:02.934494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:111368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.661 [2024-11-20 19:04:02.934502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.661 [2024-11-20 19:04:02.934510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:111376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.661 [2024-11-20 19:04:02.934516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.661 [2024-11-20 19:04:02.934524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:111384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.661 [2024-11-20 19:04:02.934531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.661 [2024-11-20 19:04:02.934539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:111392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.661 [2024-11-20 19:04:02.934548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.661 [2024-11-20 19:04:02.934556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:111400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.661 [2024-11-20 19:04:02.934563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.661 [2024-11-20 19:04:02.934571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.661 [2024-11-20 19:04:02.934578] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.661 [2024-11-20 19:04:02.934586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:111416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.661 [2024-11-20 19:04:02.934592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.661 [2024-11-20 19:04:02.934600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:111424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.661 [2024-11-20 19:04:02.934607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.661 [2024-11-20 19:04:02.934617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:111432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.661 [2024-11-20 19:04:02.934625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.661 [2024-11-20 19:04:02.934633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:111440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.661 [2024-11-20 19:04:02.934639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.661 [2024-11-20 19:04:02.934647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:111448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.661 [2024-11-20 19:04:02.934654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.661 [2024-11-20 19:04:02.934663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:111456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.661 [2024-11-20 19:04:02.934670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.661 [2024-11-20 19:04:02.934677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:111464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.661 [2024-11-20 19:04:02.934684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.661 [2024-11-20 19:04:02.934692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:111472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.661 [2024-11-20 19:04:02.934698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.661 [2024-11-20 19:04:02.934706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:111480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.661 [2024-11-20 19:04:02.934713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.661 [2024-11-20 19:04:02.934721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:111488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.661 [2024-11-20 19:04:02.934728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.661 [2024-11-20 19:04:02.934735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8ae0 is same with the state(6) to be set 00:26:40.661 [2024-11-20 19:04:02.934743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:40.661 [2024-11-20 19:04:02.934748] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:40.661 [2024-11-20 19:04:02.934754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111496 len:8 PRP1 0x0 PRP2 0x0 00:26:40.661 [2024-11-20 19:04:02.934766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.661 [2024-11-20 19:04:02.937553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.661 [2024-11-20 19:04:02.937607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:40.661 [2024-11-20 19:04:02.938086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.661 [2024-11-20 19:04:02.938129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:40.661 [2024-11-20 19:04:02.938154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:40.661 [2024-11-20 19:04:02.938762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:40.661 [2024-11-20 19:04:02.939041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.661 [2024-11-20 19:04:02.939052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.661 [2024-11-20 19:04:02.939060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.661 [2024-11-20 19:04:02.939067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.661 [2024-11-20 19:04:02.950843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.661 [2024-11-20 19:04:02.951198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.661 [2024-11-20 19:04:02.951262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:40.662 [2024-11-20 19:04:02.951285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:40.662 [2024-11-20 19:04:02.951868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:40.662 [2024-11-20 19:04:02.952112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.662 [2024-11-20 19:04:02.952121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.662 [2024-11-20 19:04:02.952128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.662 [2024-11-20 19:04:02.952136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.662 [2024-11-20 19:04:02.963722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.662 [2024-11-20 19:04:02.964125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.662 [2024-11-20 19:04:02.964143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:40.662 [2024-11-20 19:04:02.964151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:40.662 [2024-11-20 19:04:02.964317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:40.662 [2024-11-20 19:04:02.964479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.662 [2024-11-20 19:04:02.964488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.662 [2024-11-20 19:04:02.964495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.662 [2024-11-20 19:04:02.964502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.662 [2024-11-20 19:04:02.976708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.662 [2024-11-20 19:04:02.977152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.662 [2024-11-20 19:04:02.977171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:40.662 [2024-11-20 19:04:02.977179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:40.662 [2024-11-20 19:04:02.977355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:40.662 [2024-11-20 19:04:02.977524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.662 [2024-11-20 19:04:02.977535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.662 [2024-11-20 19:04:02.977546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.662 [2024-11-20 19:04:02.977553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.923 [2024-11-20 19:04:02.989782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.923 [2024-11-20 19:04:02.990091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.923 [2024-11-20 19:04:02.990110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:40.923 [2024-11-20 19:04:02.990118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:40.923 [2024-11-20 19:04:02.990298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:40.923 [2024-11-20 19:04:02.990473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.923 [2024-11-20 19:04:02.990483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.923 [2024-11-20 19:04:02.990490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.923 [2024-11-20 19:04:02.990496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.923 [2024-11-20 19:04:03.002722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.923 [2024-11-20 19:04:03.003089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.923 [2024-11-20 19:04:03.003108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:40.923 [2024-11-20 19:04:03.003115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:40.923 [2024-11-20 19:04:03.003290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:40.923 [2024-11-20 19:04:03.003460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.923 [2024-11-20 19:04:03.003470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.923 [2024-11-20 19:04:03.003476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.923 [2024-11-20 19:04:03.003483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.923 [2024-11-20 19:04:03.015668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.923 [2024-11-20 19:04:03.016043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.923 [2024-11-20 19:04:03.016061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:40.923 [2024-11-20 19:04:03.016070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:40.923 [2024-11-20 19:04:03.016245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:40.923 [2024-11-20 19:04:03.016414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.923 [2024-11-20 19:04:03.016423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.923 [2024-11-20 19:04:03.016430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.923 [2024-11-20 19:04:03.016437] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.923 [2024-11-20 19:04:03.028616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.923 [2024-11-20 19:04:03.028948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.923 [2024-11-20 19:04:03.028966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:40.923 [2024-11-20 19:04:03.028973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:40.923 [2024-11-20 19:04:03.029141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:40.923 [2024-11-20 19:04:03.029318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.923 [2024-11-20 19:04:03.029329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.923 [2024-11-20 19:04:03.029335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.923 [2024-11-20 19:04:03.029342] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.923 [2024-11-20 19:04:03.041557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.923 [2024-11-20 19:04:03.041897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.923 [2024-11-20 19:04:03.041914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:40.923 [2024-11-20 19:04:03.041922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:40.923 [2024-11-20 19:04:03.042090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:40.923 [2024-11-20 19:04:03.042266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.923 [2024-11-20 19:04:03.042277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.923 [2024-11-20 19:04:03.042283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.923 [2024-11-20 19:04:03.042290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.923 [2024-11-20 19:04:03.054509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.923 [2024-11-20 19:04:03.054859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.923 [2024-11-20 19:04:03.054877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:40.923 [2024-11-20 19:04:03.054885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:40.923 [2024-11-20 19:04:03.055053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:40.923 [2024-11-20 19:04:03.055228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.923 [2024-11-20 19:04:03.055238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.923 [2024-11-20 19:04:03.055245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.923 [2024-11-20 19:04:03.055252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.923 [2024-11-20 19:04:03.067395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.923 [2024-11-20 19:04:03.067805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.923 [2024-11-20 19:04:03.067826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:40.923 [2024-11-20 19:04:03.067835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:40.923 [2024-11-20 19:04:03.068003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:40.923 [2024-11-20 19:04:03.068175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.923 [2024-11-20 19:04:03.068184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.923 [2024-11-20 19:04:03.068191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.923 [2024-11-20 19:04:03.068197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.923 [2024-11-20 19:04:03.080297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.923 [2024-11-20 19:04:03.080672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.923 [2024-11-20 19:04:03.080690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:40.923 [2024-11-20 19:04:03.080698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:40.923 [2024-11-20 19:04:03.080867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:40.924 [2024-11-20 19:04:03.081036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.924 [2024-11-20 19:04:03.081046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.924 [2024-11-20 19:04:03.081053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.924 [2024-11-20 19:04:03.081059] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.924 [2024-11-20 19:04:03.093237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.924 [2024-11-20 19:04:03.093633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.924 [2024-11-20 19:04:03.093651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:40.924 [2024-11-20 19:04:03.093658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:40.924 [2024-11-20 19:04:03.093826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:40.924 [2024-11-20 19:04:03.093994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.924 [2024-11-20 19:04:03.094003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.924 [2024-11-20 19:04:03.094010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.924 [2024-11-20 19:04:03.094016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.924 [2024-11-20 19:04:03.106188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.924 [2024-11-20 19:04:03.106641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.924 [2024-11-20 19:04:03.106659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:40.924 [2024-11-20 19:04:03.106666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:40.924 [2024-11-20 19:04:03.106835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:40.924 [2024-11-20 19:04:03.107007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.924 [2024-11-20 19:04:03.107017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.924 [2024-11-20 19:04:03.107023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.924 [2024-11-20 19:04:03.107030] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.924 [2024-11-20 19:04:03.119194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.924 [2024-11-20 19:04:03.119616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.924 [2024-11-20 19:04:03.119634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:40.924 [2024-11-20 19:04:03.119641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:40.924 [2024-11-20 19:04:03.119809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:40.924 [2024-11-20 19:04:03.119979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.924 [2024-11-20 19:04:03.119989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.924 [2024-11-20 19:04:03.119996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.924 [2024-11-20 19:04:03.120002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.924 [2024-11-20 19:04:03.132175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.924 [2024-11-20 19:04:03.132584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.924 [2024-11-20 19:04:03.132602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:40.924 [2024-11-20 19:04:03.132610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:40.924 [2024-11-20 19:04:03.132777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:40.924 [2024-11-20 19:04:03.132945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.924 [2024-11-20 19:04:03.132955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.924 [2024-11-20 19:04:03.132961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.924 [2024-11-20 19:04:03.132967] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.924 [2024-11-20 19:04:03.145134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.924 [2024-11-20 19:04:03.145543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.924 [2024-11-20 19:04:03.145561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:40.924 [2024-11-20 19:04:03.145569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:40.924 [2024-11-20 19:04:03.145736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:40.924 [2024-11-20 19:04:03.145906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.924 [2024-11-20 19:04:03.145916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.924 [2024-11-20 19:04:03.145927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.924 [2024-11-20 19:04:03.145934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.924 [2024-11-20 19:04:03.158100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.924 [2024-11-20 19:04:03.158533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.924 [2024-11-20 19:04:03.158551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:40.924 [2024-11-20 19:04:03.158559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:40.924 [2024-11-20 19:04:03.158728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:40.924 [2024-11-20 19:04:03.158899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.924 [2024-11-20 19:04:03.158908] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.924 [2024-11-20 19:04:03.158915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.924 [2024-11-20 19:04:03.158921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.924 [2024-11-20 19:04:03.171083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.924 [2024-11-20 19:04:03.171489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.924 [2024-11-20 19:04:03.171507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:40.924 [2024-11-20 19:04:03.171514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:40.924 [2024-11-20 19:04:03.171683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:40.924 [2024-11-20 19:04:03.171853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.924 [2024-11-20 19:04:03.171863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.924 [2024-11-20 19:04:03.171869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.924 [2024-11-20 19:04:03.171875] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.924 [2024-11-20 19:04:03.184027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.924 [2024-11-20 19:04:03.184476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.924 [2024-11-20 19:04:03.184494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:40.924 [2024-11-20 19:04:03.184501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:40.924 [2024-11-20 19:04:03.184669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:40.924 [2024-11-20 19:04:03.184838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.924 [2024-11-20 19:04:03.184848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.924 [2024-11-20 19:04:03.184854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.924 [2024-11-20 19:04:03.184861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.925 [2024-11-20 19:04:03.197233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.925 [2024-11-20 19:04:03.197616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.925 [2024-11-20 19:04:03.197635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:40.925 [2024-11-20 19:04:03.197643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:40.925 [2024-11-20 19:04:03.197817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:40.925 [2024-11-20 19:04:03.197992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.925 [2024-11-20 19:04:03.198002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.925 [2024-11-20 19:04:03.198008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.925 [2024-11-20 19:04:03.198015] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.925 [2024-11-20 19:04:03.210264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.925 [2024-11-20 19:04:03.210668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.925 [2024-11-20 19:04:03.210686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:40.925 [2024-11-20 19:04:03.210694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:40.925 [2024-11-20 19:04:03.210866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:40.925 [2024-11-20 19:04:03.211040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.925 [2024-11-20 19:04:03.211049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.925 [2024-11-20 19:04:03.211067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.925 [2024-11-20 19:04:03.211074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.925 [2024-11-20 19:04:03.223212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.925 [2024-11-20 19:04:03.223612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.925 [2024-11-20 19:04:03.223628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:40.925 [2024-11-20 19:04:03.223636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:40.925 [2024-11-20 19:04:03.223795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:40.925 [2024-11-20 19:04:03.223955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.925 [2024-11-20 19:04:03.223965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.925 [2024-11-20 19:04:03.223971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.925 [2024-11-20 19:04:03.223977] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.925 [2024-11-20 19:04:03.236360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.925 [2024-11-20 19:04:03.236793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.925 [2024-11-20 19:04:03.236811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:40.925 [2024-11-20 19:04:03.236822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:40.925 [2024-11-20 19:04:03.236991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:40.925 [2024-11-20 19:04:03.237162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.925 [2024-11-20 19:04:03.237172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.925 [2024-11-20 19:04:03.237179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.925 [2024-11-20 19:04:03.237185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.186 [2024-11-20 19:04:03.249472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.186 [2024-11-20 19:04:03.249902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.186 [2024-11-20 19:04:03.249920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.186 [2024-11-20 19:04:03.249928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.186 [2024-11-20 19:04:03.250096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.186 [2024-11-20 19:04:03.250273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.186 [2024-11-20 19:04:03.250284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.186 [2024-11-20 19:04:03.250291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.186 [2024-11-20 19:04:03.250298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.186 [2024-11-20 19:04:03.262452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.186 [2024-11-20 19:04:03.262887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.186 [2024-11-20 19:04:03.262906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.186 [2024-11-20 19:04:03.262914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.186 [2024-11-20 19:04:03.263082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.186 [2024-11-20 19:04:03.263258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.186 [2024-11-20 19:04:03.263268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.186 [2024-11-20 19:04:03.263274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.186 [2024-11-20 19:04:03.263282] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.186 [2024-11-20 19:04:03.275336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.186 [2024-11-20 19:04:03.275763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.186 [2024-11-20 19:04:03.275781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.186 [2024-11-20 19:04:03.275789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.186 [2024-11-20 19:04:03.275956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.186 [2024-11-20 19:04:03.276130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.186 [2024-11-20 19:04:03.276140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.186 [2024-11-20 19:04:03.276147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.186 [2024-11-20 19:04:03.276153] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.186 [2024-11-20 19:04:03.288306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.186 [2024-11-20 19:04:03.288730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.186 [2024-11-20 19:04:03.288748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.186 [2024-11-20 19:04:03.288756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.186 [2024-11-20 19:04:03.288924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.186 [2024-11-20 19:04:03.289094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.186 [2024-11-20 19:04:03.289104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.186 [2024-11-20 19:04:03.289111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.186 [2024-11-20 19:04:03.289118] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.186 [2024-11-20 19:04:03.301185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.186 [2024-11-20 19:04:03.301626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.186 [2024-11-20 19:04:03.301643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.186 [2024-11-20 19:04:03.301651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.186 [2024-11-20 19:04:03.301820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.186 [2024-11-20 19:04:03.301989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.186 [2024-11-20 19:04:03.301999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.187 [2024-11-20 19:04:03.302006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.187 [2024-11-20 19:04:03.302012] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.187 10064.67 IOPS, 39.32 MiB/s [2024-11-20T18:04:03.512Z] [2024-11-20 19:04:03.314131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.187 [2024-11-20 19:04:03.314561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.187 [2024-11-20 19:04:03.314580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.187 [2024-11-20 19:04:03.314588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.187 [2024-11-20 19:04:03.314756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.187 [2024-11-20 19:04:03.314926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.187 [2024-11-20 19:04:03.314935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.187 [2024-11-20 19:04:03.314946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.187 [2024-11-20 19:04:03.314952] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.187 [2024-11-20 19:04:03.327104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.187 [2024-11-20 19:04:03.327533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.187 [2024-11-20 19:04:03.327551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.187 [2024-11-20 19:04:03.327558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.187 [2024-11-20 19:04:03.327726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.187 [2024-11-20 19:04:03.327896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.187 [2024-11-20 19:04:03.327906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.187 [2024-11-20 19:04:03.327913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.187 [2024-11-20 19:04:03.327919] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.187 [2024-11-20 19:04:03.340074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.187 [2024-11-20 19:04:03.340489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.187 [2024-11-20 19:04:03.340506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.187 [2024-11-20 19:04:03.340515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.187 [2024-11-20 19:04:03.340683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.187 [2024-11-20 19:04:03.340852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.187 [2024-11-20 19:04:03.340862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.187 [2024-11-20 19:04:03.340868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.187 [2024-11-20 19:04:03.340875] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.187 [2024-11-20 19:04:03.353029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.187 [2024-11-20 19:04:03.353452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.187 [2024-11-20 19:04:03.353470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.187 [2024-11-20 19:04:03.353477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.187 [2024-11-20 19:04:03.353645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.187 [2024-11-20 19:04:03.353814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.187 [2024-11-20 19:04:03.353824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.187 [2024-11-20 19:04:03.353830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.187 [2024-11-20 19:04:03.353837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.187 [2024-11-20 19:04:03.366016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.187 [2024-11-20 19:04:03.366439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.187 [2024-11-20 19:04:03.366456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.187 [2024-11-20 19:04:03.366465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.187 [2024-11-20 19:04:03.366633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.187 [2024-11-20 19:04:03.366802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.187 [2024-11-20 19:04:03.366812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.187 [2024-11-20 19:04:03.366819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.187 [2024-11-20 19:04:03.366825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.187 [2024-11-20 19:04:03.378992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.187 [2024-11-20 19:04:03.379341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.187 [2024-11-20 19:04:03.379359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.187 [2024-11-20 19:04:03.379367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.187 [2024-11-20 19:04:03.379535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.187 [2024-11-20 19:04:03.379706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.187 [2024-11-20 19:04:03.379715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.187 [2024-11-20 19:04:03.379722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.187 [2024-11-20 19:04:03.379729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.187 [2024-11-20 19:04:03.391881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.187 [2024-11-20 19:04:03.392299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.187 [2024-11-20 19:04:03.392316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.187 [2024-11-20 19:04:03.392325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.187 [2024-11-20 19:04:03.392494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.187 [2024-11-20 19:04:03.392663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.187 [2024-11-20 19:04:03.392673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.187 [2024-11-20 19:04:03.392679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.187 [2024-11-20 19:04:03.392686] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.187 [2024-11-20 19:04:03.404836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.187 [2024-11-20 19:04:03.405223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.187 [2024-11-20 19:04:03.405241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.187 [2024-11-20 19:04:03.405252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.187 [2024-11-20 19:04:03.405421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.187 [2024-11-20 19:04:03.405591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.187 [2024-11-20 19:04:03.405600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.187 [2024-11-20 19:04:03.405607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.187 [2024-11-20 19:04:03.405613] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.188 [2024-11-20 19:04:03.417773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.188 [2024-11-20 19:04:03.418190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.188 [2024-11-20 19:04:03.418213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.188 [2024-11-20 19:04:03.418221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.188 [2024-11-20 19:04:03.418390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.188 [2024-11-20 19:04:03.418561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.188 [2024-11-20 19:04:03.418571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.188 [2024-11-20 19:04:03.418577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.188 [2024-11-20 19:04:03.418583] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.188 [2024-11-20 19:04:03.430751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.188 [2024-11-20 19:04:03.431175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.188 [2024-11-20 19:04:03.431193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.188 [2024-11-20 19:04:03.431207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.188 [2024-11-20 19:04:03.431376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.188 [2024-11-20 19:04:03.431544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.188 [2024-11-20 19:04:03.431554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.188 [2024-11-20 19:04:03.431561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.188 [2024-11-20 19:04:03.431567] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.188 [2024-11-20 19:04:03.443727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.188 [2024-11-20 19:04:03.444173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.188 [2024-11-20 19:04:03.444190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.188 [2024-11-20 19:04:03.444198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.188 [2024-11-20 19:04:03.444392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.188 [2024-11-20 19:04:03.444573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.188 [2024-11-20 19:04:03.444583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.188 [2024-11-20 19:04:03.444590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.188 [2024-11-20 19:04:03.444596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.188 [2024-11-20 19:04:03.456876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.188 [2024-11-20 19:04:03.457220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.188 [2024-11-20 19:04:03.457239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.188 [2024-11-20 19:04:03.457247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.188 [2024-11-20 19:04:03.457414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.188 [2024-11-20 19:04:03.457584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.188 [2024-11-20 19:04:03.457594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.188 [2024-11-20 19:04:03.457601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.188 [2024-11-20 19:04:03.457608] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.188 [2024-11-20 19:04:03.469811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.188 [2024-11-20 19:04:03.470157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.188 [2024-11-20 19:04:03.470174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.188 [2024-11-20 19:04:03.470182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.188 [2024-11-20 19:04:03.470384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.188 [2024-11-20 19:04:03.470558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.188 [2024-11-20 19:04:03.470568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.188 [2024-11-20 19:04:03.470574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.188 [2024-11-20 19:04:03.470581] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.188 [2024-11-20 19:04:03.482728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.188 [2024-11-20 19:04:03.483088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.188 [2024-11-20 19:04:03.483107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.188 [2024-11-20 19:04:03.483117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.188 [2024-11-20 19:04:03.483291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.188 [2024-11-20 19:04:03.483462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.188 [2024-11-20 19:04:03.483472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.188 [2024-11-20 19:04:03.483483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.188 [2024-11-20 19:04:03.483490] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.188 [2024-11-20 19:04:03.495654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.188 [2024-11-20 19:04:03.496076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.188 [2024-11-20 19:04:03.496094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.188 [2024-11-20 19:04:03.496102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.188 [2024-11-20 19:04:03.496276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.188 [2024-11-20 19:04:03.496445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.188 [2024-11-20 19:04:03.496455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.188 [2024-11-20 19:04:03.496461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.188 [2024-11-20 19:04:03.496467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.188 [2024-11-20 19:04:03.508770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.188 [2024-11-20 19:04:03.509197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.188 [2024-11-20 19:04:03.509219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.188 [2024-11-20 19:04:03.509227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.188 [2024-11-20 19:04:03.509411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.188 [2024-11-20 19:04:03.509600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.188 [2024-11-20 19:04:03.509610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.188 [2024-11-20 19:04:03.509616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.188 [2024-11-20 19:04:03.509623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.449 [2024-11-20 19:04:03.521806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.449 [2024-11-20 19:04:03.522231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.449 [2024-11-20 19:04:03.522250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.449 [2024-11-20 19:04:03.522258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.449 [2024-11-20 19:04:03.522426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.449 [2024-11-20 19:04:03.522597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.449 [2024-11-20 19:04:03.522607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.449 [2024-11-20 19:04:03.522613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.449 [2024-11-20 19:04:03.522619] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.449 [2024-11-20 19:04:03.534779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.449 [2024-11-20 19:04:03.535177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.449 [2024-11-20 19:04:03.535193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.449 [2024-11-20 19:04:03.535207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.450 [2024-11-20 19:04:03.535376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.450 [2024-11-20 19:04:03.535545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.450 [2024-11-20 19:04:03.535555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.450 [2024-11-20 19:04:03.535561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.450 [2024-11-20 19:04:03.535567] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.450 [2024-11-20 19:04:03.547723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.450 [2024-11-20 19:04:03.548146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.450 [2024-11-20 19:04:03.548163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.450 [2024-11-20 19:04:03.548171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.450 [2024-11-20 19:04:03.548345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.450 [2024-11-20 19:04:03.548516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.450 [2024-11-20 19:04:03.548525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.450 [2024-11-20 19:04:03.548532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.450 [2024-11-20 19:04:03.548538] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.450 [2024-11-20 19:04:03.560691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.450 [2024-11-20 19:04:03.561118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.450 [2024-11-20 19:04:03.561136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.450 [2024-11-20 19:04:03.561143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.450 [2024-11-20 19:04:03.561318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.450 [2024-11-20 19:04:03.561487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.450 [2024-11-20 19:04:03.561498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.450 [2024-11-20 19:04:03.561504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.450 [2024-11-20 19:04:03.561510] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.450 [2024-11-20 19:04:03.573660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.450 [2024-11-20 19:04:03.574048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.450 [2024-11-20 19:04:03.574065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:41.450 [2024-11-20 19:04:03.574076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:41.450 [2024-11-20 19:04:03.574258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:41.450 [2024-11-20 19:04:03.574429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.450 [2024-11-20 19:04:03.574438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.450 [2024-11-20 19:04:03.574445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.450 [2024-11-20 19:04:03.574451] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.450 [2024-11-20 19:04:03.586584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.450 [2024-11-20 19:04:03.586986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.450 [2024-11-20 19:04:03.587004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:41.450 [2024-11-20 19:04:03.587011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:41.450 [2024-11-20 19:04:03.587180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:41.450 [2024-11-20 19:04:03.587357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.450 [2024-11-20 19:04:03.587367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.450 [2024-11-20 19:04:03.587374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.450 [2024-11-20 19:04:03.587380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.450 [2024-11-20 19:04:03.599529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.450 [2024-11-20 19:04:03.599951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.450 [2024-11-20 19:04:03.599969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:41.450 [2024-11-20 19:04:03.599976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:41.450 [2024-11-20 19:04:03.600145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:41.450 [2024-11-20 19:04:03.600321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.450 [2024-11-20 19:04:03.600331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.450 [2024-11-20 19:04:03.600338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.450 [2024-11-20 19:04:03.600344] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.450 [2024-11-20 19:04:03.612507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.450 [2024-11-20 19:04:03.612934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.450 [2024-11-20 19:04:03.612951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:41.450 [2024-11-20 19:04:03.612959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:41.450 [2024-11-20 19:04:03.613127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:41.450 [2024-11-20 19:04:03.613308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.450 [2024-11-20 19:04:03.613319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.450 [2024-11-20 19:04:03.613325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.450 [2024-11-20 19:04:03.613332] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.450 [2024-11-20 19:04:03.625485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.450 [2024-11-20 19:04:03.625907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.450 [2024-11-20 19:04:03.625925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:41.450 [2024-11-20 19:04:03.625933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:41.450 [2024-11-20 19:04:03.626101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:41.450 [2024-11-20 19:04:03.626277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.450 [2024-11-20 19:04:03.626287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.450 [2024-11-20 19:04:03.626294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.450 [2024-11-20 19:04:03.626300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.450 [2024-11-20 19:04:03.638452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.450 [2024-11-20 19:04:03.638870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.450 [2024-11-20 19:04:03.638887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:41.450 [2024-11-20 19:04:03.638895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:41.450 [2024-11-20 19:04:03.639063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:41.451 [2024-11-20 19:04:03.639238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.451 [2024-11-20 19:04:03.639248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.451 [2024-11-20 19:04:03.639255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.451 [2024-11-20 19:04:03.639262] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.451 [2024-11-20 19:04:03.651414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.451 [2024-11-20 19:04:03.651771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.451 [2024-11-20 19:04:03.651788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:41.451 [2024-11-20 19:04:03.651795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:41.451 [2024-11-20 19:04:03.651963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:41.451 [2024-11-20 19:04:03.652132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.451 [2024-11-20 19:04:03.652142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.451 [2024-11-20 19:04:03.652153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.451 [2024-11-20 19:04:03.652160] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.451 [2024-11-20 19:04:03.664369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.451 [2024-11-20 19:04:03.664770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.451 [2024-11-20 19:04:03.664787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:41.451 [2024-11-20 19:04:03.664795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:41.451 [2024-11-20 19:04:03.664963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:41.451 [2024-11-20 19:04:03.665133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.451 [2024-11-20 19:04:03.665143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.451 [2024-11-20 19:04:03.665150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.451 [2024-11-20 19:04:03.665156] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.451 [2024-11-20 19:04:03.677320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.451 [2024-11-20 19:04:03.677743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.451 [2024-11-20 19:04:03.677761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:41.451 [2024-11-20 19:04:03.677769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:41.451 [2024-11-20 19:04:03.677935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:41.451 [2024-11-20 19:04:03.678105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.451 [2024-11-20 19:04:03.678115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.451 [2024-11-20 19:04:03.678122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.451 [2024-11-20 19:04:03.678128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.451 [2024-11-20 19:04:03.690307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.451 [2024-11-20 19:04:03.690667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.451 [2024-11-20 19:04:03.690685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:41.451 [2024-11-20 19:04:03.690692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:41.451 [2024-11-20 19:04:03.690859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:41.451 [2024-11-20 19:04:03.691028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.451 [2024-11-20 19:04:03.691038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.451 [2024-11-20 19:04:03.691045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.451 [2024-11-20 19:04:03.691051] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.451 [2024-11-20 19:04:03.703207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.451 [2024-11-20 19:04:03.703587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.451 [2024-11-20 19:04:03.703605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:41.451 [2024-11-20 19:04:03.703613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:41.451 [2024-11-20 19:04:03.703781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:41.451 [2024-11-20 19:04:03.703950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.451 [2024-11-20 19:04:03.703960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.451 [2024-11-20 19:04:03.703967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.451 [2024-11-20 19:04:03.703973] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.451 [2024-11-20 19:04:03.716235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.451 [2024-11-20 19:04:03.716661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.451 [2024-11-20 19:04:03.716678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:41.451 [2024-11-20 19:04:03.716686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:41.451 [2024-11-20 19:04:03.716853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:41.451 [2024-11-20 19:04:03.717024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.451 [2024-11-20 19:04:03.717033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.451 [2024-11-20 19:04:03.717040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.451 [2024-11-20 19:04:03.717046] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.451 [2024-11-20 19:04:03.729237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.451 [2024-11-20 19:04:03.729641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.451 [2024-11-20 19:04:03.729659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:41.451 [2024-11-20 19:04:03.729667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:41.451 [2024-11-20 19:04:03.729835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:41.451 [2024-11-20 19:04:03.730006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.451 [2024-11-20 19:04:03.730015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.451 [2024-11-20 19:04:03.730022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.451 [2024-11-20 19:04:03.730028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.451 [2024-11-20 19:04:03.742192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.451 [2024-11-20 19:04:03.742553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.452 [2024-11-20 19:04:03.742570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:41.452 [2024-11-20 19:04:03.742581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:41.452 [2024-11-20 19:04:03.742749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:41.452 [2024-11-20 19:04:03.742918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.452 [2024-11-20 19:04:03.742928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.452 [2024-11-20 19:04:03.742934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.452 [2024-11-20 19:04:03.742940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.452 [2024-11-20 19:04:03.755105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.452 [2024-11-20 19:04:03.755533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.452 [2024-11-20 19:04:03.755551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:41.452 [2024-11-20 19:04:03.755559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:41.452 [2024-11-20 19:04:03.755728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:41.452 [2024-11-20 19:04:03.755897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.452 [2024-11-20 19:04:03.755908] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.452 [2024-11-20 19:04:03.755914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.452 [2024-11-20 19:04:03.755921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.452 [2024-11-20 19:04:03.768053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.452 [2024-11-20 19:04:03.768452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.452 [2024-11-20 19:04:03.768470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:41.452 [2024-11-20 19:04:03.768478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:41.452 [2024-11-20 19:04:03.768651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:41.452 [2024-11-20 19:04:03.768825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.452 [2024-11-20 19:04:03.768835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.452 [2024-11-20 19:04:03.768842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.452 [2024-11-20 19:04:03.768848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.712 [2024-11-20 19:04:03.781163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.712 [2024-11-20 19:04:03.781528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.712 [2024-11-20 19:04:03.781546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:41.712 [2024-11-20 19:04:03.781555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:41.712 [2024-11-20 19:04:03.781728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:41.712 [2024-11-20 19:04:03.781901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.712 [2024-11-20 19:04:03.781915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.712 [2024-11-20 19:04:03.781921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.712 [2024-11-20 19:04:03.781928] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.712 [2024-11-20 19:04:03.794135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.712 [2024-11-20 19:04:03.794564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.712 [2024-11-20 19:04:03.794583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:41.712 [2024-11-20 19:04:03.794591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:41.712 [2024-11-20 19:04:03.794759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:41.712 [2024-11-20 19:04:03.794929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.712 [2024-11-20 19:04:03.794939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.712 [2024-11-20 19:04:03.794946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.713 [2024-11-20 19:04:03.794952] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.713 [2024-11-20 19:04:03.807107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.713 [2024-11-20 19:04:03.807535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.713 [2024-11-20 19:04:03.807553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:41.713 [2024-11-20 19:04:03.807562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:41.713 [2024-11-20 19:04:03.807729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:41.713 [2024-11-20 19:04:03.807898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.713 [2024-11-20 19:04:03.807907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.713 [2024-11-20 19:04:03.807914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.713 [2024-11-20 19:04:03.807921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.713 [2024-11-20 19:04:03.820069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.713 [2024-11-20 19:04:03.820491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.713 [2024-11-20 19:04:03.820509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:41.713 [2024-11-20 19:04:03.820517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:41.713 [2024-11-20 19:04:03.820685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:41.713 [2024-11-20 19:04:03.820856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.713 [2024-11-20 19:04:03.820865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.713 [2024-11-20 19:04:03.820872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.713 [2024-11-20 19:04:03.820881] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.713 [2024-11-20 19:04:03.833121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.713 [2024-11-20 19:04:03.833539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.713 [2024-11-20 19:04:03.833558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:41.713 [2024-11-20 19:04:03.833567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:41.713 [2024-11-20 19:04:03.833734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:41.713 [2024-11-20 19:04:03.833904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.713 [2024-11-20 19:04:03.833914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.713 [2024-11-20 19:04:03.833920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.713 [2024-11-20 19:04:03.833927] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.713 [2024-11-20 19:04:03.846100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.713 [2024-11-20 19:04:03.846529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.713 [2024-11-20 19:04:03.846547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:41.713 [2024-11-20 19:04:03.846555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:41.713 [2024-11-20 19:04:03.846723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:41.713 [2024-11-20 19:04:03.846894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.713 [2024-11-20 19:04:03.846904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.713 [2024-11-20 19:04:03.846911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.713 [2024-11-20 19:04:03.846917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.713 [2024-11-20 19:04:03.858982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.713 [2024-11-20 19:04:03.859318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.713 [2024-11-20 19:04:03.859336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:41.713 [2024-11-20 19:04:03.859345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:41.713 [2024-11-20 19:04:03.859514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:41.713 [2024-11-20 19:04:03.859683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.713 [2024-11-20 19:04:03.859693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.713 [2024-11-20 19:04:03.859699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.713 [2024-11-20 19:04:03.859706] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.713 [2024-11-20 19:04:03.871867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.713 [2024-11-20 19:04:03.872294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.713 [2024-11-20 19:04:03.872312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:41.713 [2024-11-20 19:04:03.872320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:41.713 [2024-11-20 19:04:03.872488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:41.713 [2024-11-20 19:04:03.872658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.713 [2024-11-20 19:04:03.872669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.713 [2024-11-20 19:04:03.872676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.713 [2024-11-20 19:04:03.872682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.713 [2024-11-20 19:04:03.884836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.713 [2024-11-20 19:04:03.885158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.713 [2024-11-20 19:04:03.885176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:41.713 [2024-11-20 19:04:03.885184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:41.713 [2024-11-20 19:04:03.885400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:41.713 [2024-11-20 19:04:03.885577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.713 [2024-11-20 19:04:03.885587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.713 [2024-11-20 19:04:03.885593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.713 [2024-11-20 19:04:03.885600] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.713 [2024-11-20 19:04:03.897848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.713 [2024-11-20 19:04:03.898252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.713 [2024-11-20 19:04:03.898270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:41.714 [2024-11-20 19:04:03.898278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:41.714 [2024-11-20 19:04:03.898446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:41.714 [2024-11-20 19:04:03.898616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.714 [2024-11-20 19:04:03.898626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.714 [2024-11-20 19:04:03.898632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.714 [2024-11-20 19:04:03.898639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.714 [2024-11-20 19:04:03.910853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.714 [2024-11-20 19:04:03.911200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.714 [2024-11-20 19:04:03.911222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:41.714 [2024-11-20 19:04:03.911230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:41.714 [2024-11-20 19:04:03.911402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:41.714 [2024-11-20 19:04:03.911572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.714 [2024-11-20 19:04:03.911582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.714 [2024-11-20 19:04:03.911589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.714 [2024-11-20 19:04:03.911595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.714 [2024-11-20 19:04:03.923764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.714 [2024-11-20 19:04:03.924173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.714 [2024-11-20 19:04:03.924191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:41.714 [2024-11-20 19:04:03.924198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:41.714 [2024-11-20 19:04:03.924371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:41.714 [2024-11-20 19:04:03.924540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.714 [2024-11-20 19:04:03.924550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.714 [2024-11-20 19:04:03.924556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.714 [2024-11-20 19:04:03.924563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.714 [2024-11-20 19:04:03.936700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.714 [2024-11-20 19:04:03.937123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.714 [2024-11-20 19:04:03.937140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.714 [2024-11-20 19:04:03.937148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.714 [2024-11-20 19:04:03.937321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.714 [2024-11-20 19:04:03.937491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.714 [2024-11-20 19:04:03.937500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.714 [2024-11-20 19:04:03.937507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.714 [2024-11-20 19:04:03.937513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.714 [2024-11-20 19:04:03.949673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.714 [2024-11-20 19:04:03.950068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.714 [2024-11-20 19:04:03.950085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.714 [2024-11-20 19:04:03.950092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.714 [2024-11-20 19:04:03.950265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.714 [2024-11-20 19:04:03.950436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.714 [2024-11-20 19:04:03.950449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.714 [2024-11-20 19:04:03.950456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.714 [2024-11-20 19:04:03.950462] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.714 [2024-11-20 19:04:03.962619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.714 [2024-11-20 19:04:03.963064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.714 [2024-11-20 19:04:03.963081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.714 [2024-11-20 19:04:03.963089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.714 [2024-11-20 19:04:03.963264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.714 [2024-11-20 19:04:03.963433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.714 [2024-11-20 19:04:03.963443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.714 [2024-11-20 19:04:03.963450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.714 [2024-11-20 19:04:03.963457] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.714 [2024-11-20 19:04:03.975719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.714 [2024-11-20 19:04:03.976049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.714 [2024-11-20 19:04:03.976067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.714 [2024-11-20 19:04:03.976075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.714 [2024-11-20 19:04:03.976248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.714 [2024-11-20 19:04:03.976418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.714 [2024-11-20 19:04:03.976428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.714 [2024-11-20 19:04:03.976434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.714 [2024-11-20 19:04:03.976441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.714 [2024-11-20 19:04:03.988700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.714 [2024-11-20 19:04:03.989116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.714 [2024-11-20 19:04:03.989133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.714 [2024-11-20 19:04:03.989141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.714 [2024-11-20 19:04:03.989315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.714 [2024-11-20 19:04:03.989485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.714 [2024-11-20 19:04:03.989495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.714 [2024-11-20 19:04:03.989501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.714 [2024-11-20 19:04:03.989512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.714 [2024-11-20 19:04:04.001700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.714 [2024-11-20 19:04:04.002120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.714 [2024-11-20 19:04:04.002138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.714 [2024-11-20 19:04:04.002146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.714 [2024-11-20 19:04:04.002320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.714 [2024-11-20 19:04:04.002489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.714 [2024-11-20 19:04:04.002499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.714 [2024-11-20 19:04:04.002506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.715 [2024-11-20 19:04:04.002512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.715 [2024-11-20 19:04:04.014663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.715 [2024-11-20 19:04:04.015080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.715 [2024-11-20 19:04:04.015098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.715 [2024-11-20 19:04:04.015106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.715 [2024-11-20 19:04:04.015279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.715 [2024-11-20 19:04:04.015448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.715 [2024-11-20 19:04:04.015458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.715 [2024-11-20 19:04:04.015464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.715 [2024-11-20 19:04:04.015471] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.715 [2024-11-20 19:04:04.027623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.715 [2024-11-20 19:04:04.028063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.715 [2024-11-20 19:04:04.028081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.715 [2024-11-20 19:04:04.028089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.715 [2024-11-20 19:04:04.028264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.715 [2024-11-20 19:04:04.028433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.715 [2024-11-20 19:04:04.028442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.715 [2024-11-20 19:04:04.028449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.715 [2024-11-20 19:04:04.028456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.975 [2024-11-20 19:04:04.040649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.975 [2024-11-20 19:04:04.041059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.975 [2024-11-20 19:04:04.041075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.975 [2024-11-20 19:04:04.041083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.975 [2024-11-20 19:04:04.041257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.975 [2024-11-20 19:04:04.041443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.975 [2024-11-20 19:04:04.041453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.975 [2024-11-20 19:04:04.041459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.975 [2024-11-20 19:04:04.041466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.975 [2024-11-20 19:04:04.053589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.975 [2024-11-20 19:04:04.054037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.975 [2024-11-20 19:04:04.054055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.975 [2024-11-20 19:04:04.054064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.975 [2024-11-20 19:04:04.054237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.975 [2024-11-20 19:04:04.054407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.975 [2024-11-20 19:04:04.054417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.975 [2024-11-20 19:04:04.054423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.975 [2024-11-20 19:04:04.054430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.975 [2024-11-20 19:04:04.066612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.975 [2024-11-20 19:04:04.067045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.976 [2024-11-20 19:04:04.067063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.976 [2024-11-20 19:04:04.067070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.976 [2024-11-20 19:04:04.067243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.976 [2024-11-20 19:04:04.067413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.976 [2024-11-20 19:04:04.067423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.976 [2024-11-20 19:04:04.067429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.976 [2024-11-20 19:04:04.067436] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.976 [2024-11-20 19:04:04.079498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.976 [2024-11-20 19:04:04.079906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.976 [2024-11-20 19:04:04.079924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.976 [2024-11-20 19:04:04.079931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.976 [2024-11-20 19:04:04.080105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.976 [2024-11-20 19:04:04.080281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.976 [2024-11-20 19:04:04.080291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.976 [2024-11-20 19:04:04.080298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.976 [2024-11-20 19:04:04.080306] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.976 [2024-11-20 19:04:04.092539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.976 [2024-11-20 19:04:04.092893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.976 [2024-11-20 19:04:04.092911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.976 [2024-11-20 19:04:04.092919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.976 [2024-11-20 19:04:04.093087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.976 [2024-11-20 19:04:04.093262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.976 [2024-11-20 19:04:04.093272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.976 [2024-11-20 19:04:04.093278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.976 [2024-11-20 19:04:04.093286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.976 [2024-11-20 19:04:04.105492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.976 [2024-11-20 19:04:04.105915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.976 [2024-11-20 19:04:04.105933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.976 [2024-11-20 19:04:04.105941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.976 [2024-11-20 19:04:04.106109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.976 [2024-11-20 19:04:04.106284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.976 [2024-11-20 19:04:04.106294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.976 [2024-11-20 19:04:04.106301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.976 [2024-11-20 19:04:04.106307] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.976 [2024-11-20 19:04:04.118537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.976 [2024-11-20 19:04:04.118973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.976 [2024-11-20 19:04:04.118991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.976 [2024-11-20 19:04:04.118999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.976 [2024-11-20 19:04:04.119172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.976 [2024-11-20 19:04:04.119353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.976 [2024-11-20 19:04:04.119367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.976 [2024-11-20 19:04:04.119374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.976 [2024-11-20 19:04:04.119382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.976 [2024-11-20 19:04:04.131482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.976 [2024-11-20 19:04:04.131810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.976 [2024-11-20 19:04:04.131828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.976 [2024-11-20 19:04:04.131836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.976 [2024-11-20 19:04:04.132003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.976 [2024-11-20 19:04:04.132174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.976 [2024-11-20 19:04:04.132184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.976 [2024-11-20 19:04:04.132190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.976 [2024-11-20 19:04:04.132196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.976 [2024-11-20 19:04:04.144372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.976 [2024-11-20 19:04:04.144727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.976 [2024-11-20 19:04:04.144745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.976 [2024-11-20 19:04:04.144753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.976 [2024-11-20 19:04:04.144921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.976 [2024-11-20 19:04:04.145091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.976 [2024-11-20 19:04:04.145100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.976 [2024-11-20 19:04:04.145107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.976 [2024-11-20 19:04:04.145114] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.976 [2024-11-20 19:04:04.157304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.976 [2024-11-20 19:04:04.157704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.976 [2024-11-20 19:04:04.157723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.976 [2024-11-20 19:04:04.157731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.976 [2024-11-20 19:04:04.157899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.976 [2024-11-20 19:04:04.158070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.976 [2024-11-20 19:04:04.158080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.976 [2024-11-20 19:04:04.158086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.976 [2024-11-20 19:04:04.158096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.976 [2024-11-20 19:04:04.170334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.976 [2024-11-20 19:04:04.170667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.976 [2024-11-20 19:04:04.170685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.976 [2024-11-20 19:04:04.170693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.977 [2024-11-20 19:04:04.170861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.977 [2024-11-20 19:04:04.171031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.977 [2024-11-20 19:04:04.171042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.977 [2024-11-20 19:04:04.171048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.977 [2024-11-20 19:04:04.171054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.977 [2024-11-20 19:04:04.183260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.977 [2024-11-20 19:04:04.183620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.977 [2024-11-20 19:04:04.183637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:41.977 [2024-11-20 19:04:04.183645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:41.977 [2024-11-20 19:04:04.183814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:41.977 [2024-11-20 19:04:04.183983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.977 [2024-11-20 19:04:04.183993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.977 [2024-11-20 19:04:04.183999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.977 [2024-11-20 19:04:04.184005] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.977 [2024-11-20 19:04:04.196142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.977 [2024-11-20 19:04:04.196484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.977 [2024-11-20 19:04:04.196501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:41.977 [2024-11-20 19:04:04.196508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:41.977 [2024-11-20 19:04:04.196666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:41.977 [2024-11-20 19:04:04.196827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.977 [2024-11-20 19:04:04.196837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.977 [2024-11-20 19:04:04.196843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.977 [2024-11-20 19:04:04.196850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.977 [2024-11-20 19:04:04.208945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.977 [2024-11-20 19:04:04.209379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.977 [2024-11-20 19:04:04.209434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:41.977 [2024-11-20 19:04:04.209459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:41.977 [2024-11-20 19:04:04.209994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:41.977 [2024-11-20 19:04:04.210164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.977 [2024-11-20 19:04:04.210174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.977 [2024-11-20 19:04:04.210181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.977 [2024-11-20 19:04:04.210187] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.977 [2024-11-20 19:04:04.221824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.977 [2024-11-20 19:04:04.222271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.977 [2024-11-20 19:04:04.222289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:41.977 [2024-11-20 19:04:04.222297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:41.977 [2024-11-20 19:04:04.222465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:41.977 [2024-11-20 19:04:04.222634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.977 [2024-11-20 19:04:04.222643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.977 [2024-11-20 19:04:04.222650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.977 [2024-11-20 19:04:04.222656] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.977 [2024-11-20 19:04:04.234969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.977 [2024-11-20 19:04:04.235351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.977 [2024-11-20 19:04:04.235399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:41.977 [2024-11-20 19:04:04.235423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:41.977 [2024-11-20 19:04:04.236107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:41.977 [2024-11-20 19:04:04.236288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.977 [2024-11-20 19:04:04.236299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.977 [2024-11-20 19:04:04.236306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.977 [2024-11-20 19:04:04.236313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.977 [2024-11-20 19:04:04.247765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.977 [2024-11-20 19:04:04.248234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.977 [2024-11-20 19:04:04.248283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:41.977 [2024-11-20 19:04:04.248307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:41.977 [2024-11-20 19:04:04.248900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:41.977 [2024-11-20 19:04:04.249349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.977 [2024-11-20 19:04:04.249359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.977 [2024-11-20 19:04:04.249366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.977 [2024-11-20 19:04:04.249372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.977 [2024-11-20 19:04:04.260681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.977 [2024-11-20 19:04:04.261084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.977 [2024-11-20 19:04:04.261101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:41.977 [2024-11-20 19:04:04.261109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:41.977 [2024-11-20 19:04:04.261284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:41.977 [2024-11-20 19:04:04.261454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.977 [2024-11-20 19:04:04.261474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.977 [2024-11-20 19:04:04.261481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.977 [2024-11-20 19:04:04.261488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.977 [2024-11-20 19:04:04.273472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.978 [2024-11-20 19:04:04.273800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.978 [2024-11-20 19:04:04.273818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:41.978 [2024-11-20 19:04:04.273825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:41.978 [2024-11-20 19:04:04.273983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:41.978 [2024-11-20 19:04:04.274143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.978 [2024-11-20 19:04:04.274153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.978 [2024-11-20 19:04:04.274159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.978 [2024-11-20 19:04:04.274165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.978 [2024-11-20 19:04:04.286304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.978 [2024-11-20 19:04:04.286653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.978 [2024-11-20 19:04:04.286671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:41.978 [2024-11-20 19:04:04.286679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:41.978 [2024-11-20 19:04:04.286847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:41.978 [2024-11-20 19:04:04.287017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.978 [2024-11-20 19:04:04.287029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.978 [2024-11-20 19:04:04.287036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.978 [2024-11-20 19:04:04.287043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.978 [2024-11-20 19:04:04.299452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.238 [2024-11-20 19:04:04.299790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.238 [2024-11-20 19:04:04.299808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.238 [2024-11-20 19:04:04.299817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.238 [2024-11-20 19:04:04.299991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.238 [2024-11-20 19:04:04.300166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.239 [2024-11-20 19:04:04.300176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.239 [2024-11-20 19:04:04.300183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.239 [2024-11-20 19:04:04.300189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.239 7548.50 IOPS, 29.49 MiB/s [2024-11-20T18:04:04.564Z] [2024-11-20 19:04:04.312396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.239 [2024-11-20 19:04:04.312777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.239 [2024-11-20 19:04:04.312795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.239 [2024-11-20 19:04:04.312803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.239 [2024-11-20 19:04:04.312971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.239 [2024-11-20 19:04:04.313141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.239 [2024-11-20 19:04:04.313152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.239 [2024-11-20 19:04:04.313158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.239 [2024-11-20 19:04:04.313165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.239 [2024-11-20 19:04:04.325284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.239 [2024-11-20 19:04:04.325574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.239 [2024-11-20 19:04:04.325590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.239 [2024-11-20 19:04:04.325598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.239 [2024-11-20 19:04:04.325765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.239 [2024-11-20 19:04:04.325933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.239 [2024-11-20 19:04:04.325943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.239 [2024-11-20 19:04:04.325950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.239 [2024-11-20 19:04:04.325956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.239 [2024-11-20 19:04:04.338165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.239 [2024-11-20 19:04:04.338443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.239 [2024-11-20 19:04:04.338460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.239 [2024-11-20 19:04:04.338468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.239 [2024-11-20 19:04:04.338627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.239 [2024-11-20 19:04:04.338787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.239 [2024-11-20 19:04:04.338797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.239 [2024-11-20 19:04:04.338803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.239 [2024-11-20 19:04:04.338810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.239 [2024-11-20 19:04:04.351121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.239 [2024-11-20 19:04:04.351526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.239 [2024-11-20 19:04:04.351573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.239 [2024-11-20 19:04:04.351597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.239 [2024-11-20 19:04:04.352179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.239 [2024-11-20 19:04:04.352627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.239 [2024-11-20 19:04:04.352638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.239 [2024-11-20 19:04:04.352644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.239 [2024-11-20 19:04:04.352651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.239 [2024-11-20 19:04:04.364028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.239 [2024-11-20 19:04:04.364336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.239 [2024-11-20 19:04:04.364354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.239 [2024-11-20 19:04:04.364361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.239 [2024-11-20 19:04:04.364520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.239 [2024-11-20 19:04:04.364680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.239 [2024-11-20 19:04:04.364689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.239 [2024-11-20 19:04:04.364696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.239 [2024-11-20 19:04:04.364702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.239 [2024-11-20 19:04:04.376837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.239 [2024-11-20 19:04:04.377263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.239 [2024-11-20 19:04:04.377286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.239 [2024-11-20 19:04:04.377294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.239 [2024-11-20 19:04:04.377462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.239 [2024-11-20 19:04:04.377633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.239 [2024-11-20 19:04:04.377643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.239 [2024-11-20 19:04:04.377649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.239 [2024-11-20 19:04:04.377656] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.239 [2024-11-20 19:04:04.389627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.239 [2024-11-20 19:04:04.390025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.239 [2024-11-20 19:04:04.390042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.239 [2024-11-20 19:04:04.390050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.239 [2024-11-20 19:04:04.390216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.239 [2024-11-20 19:04:04.390401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.239 [2024-11-20 19:04:04.390411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.239 [2024-11-20 19:04:04.390418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.239 [2024-11-20 19:04:04.390424] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.239 [2024-11-20 19:04:04.402550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.239 [2024-11-20 19:04:04.402955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.239 [2024-11-20 19:04:04.402973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.239 [2024-11-20 19:04:04.402980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.239 [2024-11-20 19:04:04.403140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.239 [2024-11-20 19:04:04.403326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.239 [2024-11-20 19:04:04.403337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.239 [2024-11-20 19:04:04.403344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.240 [2024-11-20 19:04:04.403350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.240 [2024-11-20 19:04:04.415470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.240 [2024-11-20 19:04:04.415798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.240 [2024-11-20 19:04:04.415843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.240 [2024-11-20 19:04:04.415867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.240 [2024-11-20 19:04:04.416377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.240 [2024-11-20 19:04:04.416540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.240 [2024-11-20 19:04:04.416549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.240 [2024-11-20 19:04:04.416556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.240 [2024-11-20 19:04:04.416562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.240 [2024-11-20 19:04:04.428412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.240 [2024-11-20 19:04:04.428696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.240 [2024-11-20 19:04:04.428740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.240 [2024-11-20 19:04:04.428763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.240 [2024-11-20 19:04:04.429359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.240 [2024-11-20 19:04:04.429579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.240 [2024-11-20 19:04:04.429589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.240 [2024-11-20 19:04:04.429595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.240 [2024-11-20 19:04:04.429601] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.240 [2024-11-20 19:04:04.441165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.240 [2024-11-20 19:04:04.441519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.240 [2024-11-20 19:04:04.441536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.240 [2024-11-20 19:04:04.441544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.240 [2024-11-20 19:04:04.441712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.240 [2024-11-20 19:04:04.441881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.240 [2024-11-20 19:04:04.441891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.240 [2024-11-20 19:04:04.441897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.240 [2024-11-20 19:04:04.441904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.240 [2024-11-20 19:04:04.453953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.240 [2024-11-20 19:04:04.454309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.240 [2024-11-20 19:04:04.454355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.240 [2024-11-20 19:04:04.454379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.240 [2024-11-20 19:04:04.454621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.240 [2024-11-20 19:04:04.454791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.240 [2024-11-20 19:04:04.454800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.240 [2024-11-20 19:04:04.454812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.240 [2024-11-20 19:04:04.454819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.240 [2024-11-20 19:04:04.466837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.240 [2024-11-20 19:04:04.467231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.240 [2024-11-20 19:04:04.467248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.240 [2024-11-20 19:04:04.467256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.240 [2024-11-20 19:04:04.467416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.240 [2024-11-20 19:04:04.467575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.240 [2024-11-20 19:04:04.467584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.240 [2024-11-20 19:04:04.467591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.240 [2024-11-20 19:04:04.467597] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.240 [2024-11-20 19:04:04.479810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.240 [2024-11-20 19:04:04.480225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.240 [2024-11-20 19:04:04.480243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.240 [2024-11-20 19:04:04.480250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.240 [2024-11-20 19:04:04.480409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.240 [2024-11-20 19:04:04.480569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.240 [2024-11-20 19:04:04.480578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.240 [2024-11-20 19:04:04.480584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.240 [2024-11-20 19:04:04.480591] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.240 [2024-11-20 19:04:04.492903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.240 [2024-11-20 19:04:04.493296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.240 [2024-11-20 19:04:04.493314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.240 [2024-11-20 19:04:04.493321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.240 [2024-11-20 19:04:04.493495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.240 [2024-11-20 19:04:04.493678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.240 [2024-11-20 19:04:04.493688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.240 [2024-11-20 19:04:04.493694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.240 [2024-11-20 19:04:04.493701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.240 [2024-11-20 19:04:04.505759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.240 [2024-11-20 19:04:04.506176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.240 [2024-11-20 19:04:04.506193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.240 [2024-11-20 19:04:04.506200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.240 [2024-11-20 19:04:04.506388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.240 [2024-11-20 19:04:04.506561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.240 [2024-11-20 19:04:04.506570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.240 [2024-11-20 19:04:04.506576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.240 [2024-11-20 19:04:04.506582] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.240 [2024-11-20 19:04:04.518599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.240 [2024-11-20 19:04:04.518985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.240 [2024-11-20 19:04:04.519002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.241 [2024-11-20 19:04:04.519009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.241 [2024-11-20 19:04:04.519168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.241 [2024-11-20 19:04:04.519357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.241 [2024-11-20 19:04:04.519367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.241 [2024-11-20 19:04:04.519374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.241 [2024-11-20 19:04:04.519381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.241 [2024-11-20 19:04:04.531445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.241 [2024-11-20 19:04:04.531886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.241 [2024-11-20 19:04:04.531903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.241 [2024-11-20 19:04:04.531910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.241 [2024-11-20 19:04:04.532069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.241 [2024-11-20 19:04:04.532233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.241 [2024-11-20 19:04:04.532243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.241 [2024-11-20 19:04:04.532250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.241 [2024-11-20 19:04:04.532257] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.241 [2024-11-20 19:04:04.544340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.241 [2024-11-20 19:04:04.544772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.241 [2024-11-20 19:04:04.544825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.241 [2024-11-20 19:04:04.544850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.241 [2024-11-20 19:04:04.545445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.241 [2024-11-20 19:04:04.546014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.241 [2024-11-20 19:04:04.546024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.241 [2024-11-20 19:04:04.546030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.241 [2024-11-20 19:04:04.546037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.241 [2024-11-20 19:04:04.557127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.241 [2024-11-20 19:04:04.557540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.241 [2024-11-20 19:04:04.557557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:42.241 [2024-11-20 19:04:04.557565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:42.241 [2024-11-20 19:04:04.557724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:42.241 [2024-11-20 19:04:04.557910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.241 [2024-11-20 19:04:04.557920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.241 [2024-11-20 19:04:04.557927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.241 [2024-11-20 19:04:04.557933] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.501 [2024-11-20 19:04:04.570204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.501 [2024-11-20 19:04:04.570636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.501 [2024-11-20 19:04:04.570680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:42.501 [2024-11-20 19:04:04.570704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:42.501 [2024-11-20 19:04:04.571180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:42.501 [2024-11-20 19:04:04.571370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.501 [2024-11-20 19:04:04.571378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.501 [2024-11-20 19:04:04.571386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.501 [2024-11-20 19:04:04.571392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.501 [2024-11-20 19:04:04.582995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.501 [2024-11-20 19:04:04.583415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.501 [2024-11-20 19:04:04.583433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:42.501 [2024-11-20 19:04:04.583440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:42.501 [2024-11-20 19:04:04.583599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:42.501 [2024-11-20 19:04:04.583762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.501 [2024-11-20 19:04:04.583772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.501 [2024-11-20 19:04:04.583778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.501 [2024-11-20 19:04:04.583784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.501 [2024-11-20 19:04:04.595775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.501 [2024-11-20 19:04:04.596191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.501 [2024-11-20 19:04:04.596213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:42.501 [2024-11-20 19:04:04.596221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:42.501 [2024-11-20 19:04:04.596381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:42.501 [2024-11-20 19:04:04.596540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.502 [2024-11-20 19:04:04.596549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.502 [2024-11-20 19:04:04.596555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.502 [2024-11-20 19:04:04.596562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.502 [2024-11-20 19:04:04.608596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.502 [2024-11-20 19:04:04.609011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.502 [2024-11-20 19:04:04.609028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:42.502 [2024-11-20 19:04:04.609036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:42.502 [2024-11-20 19:04:04.609195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:42.502 [2024-11-20 19:04:04.609384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.502 [2024-11-20 19:04:04.609394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.502 [2024-11-20 19:04:04.609400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.502 [2024-11-20 19:04:04.609407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.502 [2024-11-20 19:04:04.621459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.502 [2024-11-20 19:04:04.621878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.502 [2024-11-20 19:04:04.621924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:42.502 [2024-11-20 19:04:04.621948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:42.502 [2024-11-20 19:04:04.622544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:42.502 [2024-11-20 19:04:04.623141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.502 [2024-11-20 19:04:04.623151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.502 [2024-11-20 19:04:04.623160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.502 [2024-11-20 19:04:04.623167] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.502 [2024-11-20 19:04:04.634188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.502 [2024-11-20 19:04:04.634602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.502 [2024-11-20 19:04:04.634619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:42.502 [2024-11-20 19:04:04.634626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:42.502 [2024-11-20 19:04:04.634785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:42.502 [2024-11-20 19:04:04.634945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.502 [2024-11-20 19:04:04.634954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.502 [2024-11-20 19:04:04.634961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.502 [2024-11-20 19:04:04.634967] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.502 [2024-11-20 19:04:04.646987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.502 [2024-11-20 19:04:04.647404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.502 [2024-11-20 19:04:04.647450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:42.502 [2024-11-20 19:04:04.647475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:42.502 [2024-11-20 19:04:04.648057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:42.502 [2024-11-20 19:04:04.648240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.502 [2024-11-20 19:04:04.648249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.502 [2024-11-20 19:04:04.648271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.502 [2024-11-20 19:04:04.648277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.502 [2024-11-20 19:04:04.659746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.502 [2024-11-20 19:04:04.660101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.502 [2024-11-20 19:04:04.660145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:42.502 [2024-11-20 19:04:04.660169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:42.502 [2024-11-20 19:04:04.660768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:42.502 [2024-11-20 19:04:04.661253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.502 [2024-11-20 19:04:04.661262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.502 [2024-11-20 19:04:04.661269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.502 [2024-11-20 19:04:04.661276] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.502 [2024-11-20 19:04:04.672592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.502 [2024-11-20 19:04:04.673025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.502 [2024-11-20 19:04:04.673070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:42.502 [2024-11-20 19:04:04.673094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:42.502 [2024-11-20 19:04:04.673692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:42.502 [2024-11-20 19:04:04.674280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.502 [2024-11-20 19:04:04.674290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.502 [2024-11-20 19:04:04.674296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.502 [2024-11-20 19:04:04.674303] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.502 [2024-11-20 19:04:04.685399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.502 [2024-11-20 19:04:04.685808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.502 [2024-11-20 19:04:04.685826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:42.502 [2024-11-20 19:04:04.685834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:42.502 [2024-11-20 19:04:04.685993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:42.502 [2024-11-20 19:04:04.686153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.502 [2024-11-20 19:04:04.686162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.502 [2024-11-20 19:04:04.686169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.502 [2024-11-20 19:04:04.686176] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.502 [2024-11-20 19:04:04.698219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.502 [2024-11-20 19:04:04.698632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.502 [2024-11-20 19:04:04.698649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:42.502 [2024-11-20 19:04:04.698657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:42.502 [2024-11-20 19:04:04.698816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:42.502 [2024-11-20 19:04:04.698976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.503 [2024-11-20 19:04:04.698985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.503 [2024-11-20 19:04:04.698991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.503 [2024-11-20 19:04:04.698997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.503 [2024-11-20 19:04:04.710940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.503 [2024-11-20 19:04:04.711351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.503 [2024-11-20 19:04:04.711369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:42.503 [2024-11-20 19:04:04.711381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:42.503 [2024-11-20 19:04:04.711541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:42.503 [2024-11-20 19:04:04.711702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.503 [2024-11-20 19:04:04.711711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.503 [2024-11-20 19:04:04.711718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.503 [2024-11-20 19:04:04.711724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.503 [2024-11-20 19:04:04.723735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.503 [2024-11-20 19:04:04.724167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.503 [2024-11-20 19:04:04.724224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:42.503 [2024-11-20 19:04:04.724250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:42.503 [2024-11-20 19:04:04.724832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:42.503 [2024-11-20 19:04:04.725358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.503 [2024-11-20 19:04:04.725367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.503 [2024-11-20 19:04:04.725374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.503 [2024-11-20 19:04:04.725381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.503 [2024-11-20 19:04:04.736491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.503 [2024-11-20 19:04:04.736927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.503 [2024-11-20 19:04:04.736945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:42.503 [2024-11-20 19:04:04.736952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:42.503 [2024-11-20 19:04:04.737111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:42.503 [2024-11-20 19:04:04.737296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.503 [2024-11-20 19:04:04.737306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.503 [2024-11-20 19:04:04.737313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.503 [2024-11-20 19:04:04.737320] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.503 [2024-11-20 19:04:04.749622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.503 [2024-11-20 19:04:04.750005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.503 [2024-11-20 19:04:04.750023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:42.503 [2024-11-20 19:04:04.750031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:42.503 [2024-11-20 19:04:04.750209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:42.503 [2024-11-20 19:04:04.750387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.503 [2024-11-20 19:04:04.750397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.503 [2024-11-20 19:04:04.750404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.503 [2024-11-20 19:04:04.750410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.503 [2024-11-20 19:04:04.762553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.503 [2024-11-20 19:04:04.762934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.503 [2024-11-20 19:04:04.762979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:42.503 [2024-11-20 19:04:04.763003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:42.503 [2024-11-20 19:04:04.763528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:42.503 [2024-11-20 19:04:04.763699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.503 [2024-11-20 19:04:04.763709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.503 [2024-11-20 19:04:04.763715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.503 [2024-11-20 19:04:04.763722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.503 [2024-11-20 19:04:04.775414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.503 [2024-11-20 19:04:04.775839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.503 [2024-11-20 19:04:04.775886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:42.503 [2024-11-20 19:04:04.775909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:42.503 [2024-11-20 19:04:04.776389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:42.503 [2024-11-20 19:04:04.776560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.503 [2024-11-20 19:04:04.776571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.503 [2024-11-20 19:04:04.776577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.503 [2024-11-20 19:04:04.776584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.503 [2024-11-20 19:04:04.788232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.503 [2024-11-20 19:04:04.788650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.503 [2024-11-20 19:04:04.788704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:42.503 [2024-11-20 19:04:04.788728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:42.503 [2024-11-20 19:04:04.789308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:42.503 [2024-11-20 19:04:04.789479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.503 [2024-11-20 19:04:04.789489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.503 [2024-11-20 19:04:04.789499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.503 [2024-11-20 19:04:04.789506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.503 [2024-11-20 19:04:04.801019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.503 [2024-11-20 19:04:04.801451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.503 [2024-11-20 19:04:04.801497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:42.503 [2024-11-20 19:04:04.801522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:42.503 [2024-11-20 19:04:04.802104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:42.503 [2024-11-20 19:04:04.802590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.503 [2024-11-20 19:04:04.802601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.503 [2024-11-20 19:04:04.802608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.503 [2024-11-20 19:04:04.802614] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.504 [2024-11-20 19:04:04.813872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.504 [2024-11-20 19:04:04.814279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.504 [2024-11-20 19:04:04.814326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.504 [2024-11-20 19:04:04.814350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.504 [2024-11-20 19:04:04.814852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.504 [2024-11-20 19:04:04.815013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.504 [2024-11-20 19:04:04.815021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.504 [2024-11-20 19:04:04.815027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.504 [2024-11-20 19:04:04.815033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.764 [2024-11-20 19:04:04.826965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.764 [2024-11-20 19:04:04.827388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.764 [2024-11-20 19:04:04.827407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.764 [2024-11-20 19:04:04.827415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.764 [2024-11-20 19:04:04.827575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.764 [2024-11-20 19:04:04.827735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.764 [2024-11-20 19:04:04.827745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.764 [2024-11-20 19:04:04.827751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.764 [2024-11-20 19:04:04.827757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.764 [2024-11-20 19:04:04.839858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.764 [2024-11-20 19:04:04.840275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.764 [2024-11-20 19:04:04.840320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.764 [2024-11-20 19:04:04.840344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.764 [2024-11-20 19:04:04.840770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.764 [2024-11-20 19:04:04.840930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.765 [2024-11-20 19:04:04.840938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.765 [2024-11-20 19:04:04.840944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.765 [2024-11-20 19:04:04.840950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.765 [2024-11-20 19:04:04.854925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.765 [2024-11-20 19:04:04.855391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.765 [2024-11-20 19:04:04.855415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.765 [2024-11-20 19:04:04.855426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.765 [2024-11-20 19:04:04.855681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.765 [2024-11-20 19:04:04.855938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.765 [2024-11-20 19:04:04.855950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.765 [2024-11-20 19:04:04.855960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.765 [2024-11-20 19:04:04.855969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.765 [2024-11-20 19:04:04.867881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.765 [2024-11-20 19:04:04.868249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.765 [2024-11-20 19:04:04.868297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.765 [2024-11-20 19:04:04.868322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.765 [2024-11-20 19:04:04.868794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.765 [2024-11-20 19:04:04.868962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.765 [2024-11-20 19:04:04.868971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.765 [2024-11-20 19:04:04.868977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.765 [2024-11-20 19:04:04.868983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.765 [2024-11-20 19:04:04.880639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.765 [2024-11-20 19:04:04.881049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.765 [2024-11-20 19:04:04.881090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.765 [2024-11-20 19:04:04.881123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.765 [2024-11-20 19:04:04.881728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.765 [2024-11-20 19:04:04.882120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.765 [2024-11-20 19:04:04.882139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.765 [2024-11-20 19:04:04.882154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.765 [2024-11-20 19:04:04.882168] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.765 [2024-11-20 19:04:04.895288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.765 [2024-11-20 19:04:04.895822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.765 [2024-11-20 19:04:04.895868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.765 [2024-11-20 19:04:04.895891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.765 [2024-11-20 19:04:04.896489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.765 [2024-11-20 19:04:04.896961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.765 [2024-11-20 19:04:04.896974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.765 [2024-11-20 19:04:04.896984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.765 [2024-11-20 19:04:04.896994] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.765 [2024-11-20 19:04:04.908275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.765 [2024-11-20 19:04:04.908633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.765 [2024-11-20 19:04:04.908650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.765 [2024-11-20 19:04:04.908657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.765 [2024-11-20 19:04:04.908826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.765 [2024-11-20 19:04:04.908994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.765 [2024-11-20 19:04:04.909004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.765 [2024-11-20 19:04:04.909011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.765 [2024-11-20 19:04:04.909017] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.765 [2024-11-20 19:04:04.921101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.765 [2024-11-20 19:04:04.921531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.765 [2024-11-20 19:04:04.921549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.765 [2024-11-20 19:04:04.921557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.765 [2024-11-20 19:04:04.921716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.765 [2024-11-20 19:04:04.921878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.765 [2024-11-20 19:04:04.921888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.765 [2024-11-20 19:04:04.921894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.765 [2024-11-20 19:04:04.921900] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.765 [2024-11-20 19:04:04.933937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.765 [2024-11-20 19:04:04.934355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.765 [2024-11-20 19:04:04.934373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.765 [2024-11-20 19:04:04.934382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.765 [2024-11-20 19:04:04.934542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.765 [2024-11-20 19:04:04.934702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.765 [2024-11-20 19:04:04.934712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.765 [2024-11-20 19:04:04.934718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.765 [2024-11-20 19:04:04.934725] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.765 [2024-11-20 19:04:04.946718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.765 [2024-11-20 19:04:04.947088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.766 [2024-11-20 19:04:04.947104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.766 [2024-11-20 19:04:04.947112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.766 [2024-11-20 19:04:04.947294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.766 [2024-11-20 19:04:04.947463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.766 [2024-11-20 19:04:04.947473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.766 [2024-11-20 19:04:04.947480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.766 [2024-11-20 19:04:04.947486] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.766 [2024-11-20 19:04:04.959726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.766 [2024-11-20 19:04:04.960154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.766 [2024-11-20 19:04:04.960172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.766 [2024-11-20 19:04:04.960179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.766 [2024-11-20 19:04:04.960354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.766 [2024-11-20 19:04:04.960523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.766 [2024-11-20 19:04:04.960533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.766 [2024-11-20 19:04:04.960543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.766 [2024-11-20 19:04:04.960550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.766 [2024-11-20 19:04:04.972694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.766 [2024-11-20 19:04:04.973119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.766 [2024-11-20 19:04:04.973165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.766 [2024-11-20 19:04:04.973190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.766 [2024-11-20 19:04:04.973749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.766 [2024-11-20 19:04:04.973920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.766 [2024-11-20 19:04:04.973930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.766 [2024-11-20 19:04:04.973936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.766 [2024-11-20 19:04:04.973943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.766 [2024-11-20 19:04:04.985503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.766 [2024-11-20 19:04:04.985921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.766 [2024-11-20 19:04:04.985938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.766 [2024-11-20 19:04:04.985946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.766 [2024-11-20 19:04:04.986105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.766 [2024-11-20 19:04:04.986288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.766 [2024-11-20 19:04:04.986298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.766 [2024-11-20 19:04:04.986305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.766 [2024-11-20 19:04:04.986312] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.766 [2024-11-20 19:04:04.998298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.766 [2024-11-20 19:04:04.998665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.766 [2024-11-20 19:04:04.998682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.766 [2024-11-20 19:04:04.998691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.766 [2024-11-20 19:04:04.998860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.766 [2024-11-20 19:04:04.999030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.766 [2024-11-20 19:04:04.999040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.766 [2024-11-20 19:04:04.999047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.766 [2024-11-20 19:04:04.999054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.766 [2024-11-20 19:04:05.011346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.766 [2024-11-20 19:04:05.011792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.766 [2024-11-20 19:04:05.011809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.766 [2024-11-20 19:04:05.011817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.766 [2024-11-20 19:04:05.011991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.766 [2024-11-20 19:04:05.012165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.766 [2024-11-20 19:04:05.012174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.766 [2024-11-20 19:04:05.012181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.766 [2024-11-20 19:04:05.012188] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.766 [2024-11-20 19:04:05.024230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.766 [2024-11-20 19:04:05.024570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.766 [2024-11-20 19:04:05.024587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.766 [2024-11-20 19:04:05.024594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.766 [2024-11-20 19:04:05.024753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.766 [2024-11-20 19:04:05.024913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.766 [2024-11-20 19:04:05.024923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.766 [2024-11-20 19:04:05.024929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.766 [2024-11-20 19:04:05.024935] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.766 [2024-11-20 19:04:05.037064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.766 [2024-11-20 19:04:05.037476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.766 [2024-11-20 19:04:05.037516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.766 [2024-11-20 19:04:05.037542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.766 [2024-11-20 19:04:05.038067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.766 [2024-11-20 19:04:05.038251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.766 [2024-11-20 19:04:05.038261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.766 [2024-11-20 19:04:05.038267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.766 [2024-11-20 19:04:05.038274] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.767 [2024-11-20 19:04:05.049918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.767 [2024-11-20 19:04:05.050329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.767 [2024-11-20 19:04:05.050367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.767 [2024-11-20 19:04:05.050402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.767 [2024-11-20 19:04:05.050941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.767 [2024-11-20 19:04:05.051111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.767 [2024-11-20 19:04:05.051119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.767 [2024-11-20 19:04:05.051125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.767 [2024-11-20 19:04:05.051131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.767 [2024-11-20 19:04:05.062666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.767 [2024-11-20 19:04:05.063064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.767 [2024-11-20 19:04:05.063081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.767 [2024-11-20 19:04:05.063089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.767 [2024-11-20 19:04:05.063272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.767 [2024-11-20 19:04:05.063441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.767 [2024-11-20 19:04:05.063450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.767 [2024-11-20 19:04:05.063457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.767 [2024-11-20 19:04:05.063464] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.767 [2024-11-20 19:04:05.075414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.767 [2024-11-20 19:04:05.075842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.767 [2024-11-20 19:04:05.075887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:42.767 [2024-11-20 19:04:05.075911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:42.767 [2024-11-20 19:04:05.076419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:42.767 [2024-11-20 19:04:05.076589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.767 [2024-11-20 19:04:05.076599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.767 [2024-11-20 19:04:05.076605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.767 [2024-11-20 19:04:05.076612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.767 [2024-11-20 19:04:05.088351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.028 [2024-11-20 19:04:05.088791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.028 [2024-11-20 19:04:05.088809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.028 [2024-11-20 19:04:05.088817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.028 [2024-11-20 19:04:05.088989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.028 [2024-11-20 19:04:05.089168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.028 [2024-11-20 19:04:05.089178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.028 [2024-11-20 19:04:05.089184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.028 [2024-11-20 19:04:05.089191] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.028 [2024-11-20 19:04:05.101267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.028 [2024-11-20 19:04:05.101616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.028 [2024-11-20 19:04:05.101633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.028 [2024-11-20 19:04:05.101640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.028 [2024-11-20 19:04:05.101798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.028 [2024-11-20 19:04:05.101958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.028 [2024-11-20 19:04:05.101967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.028 [2024-11-20 19:04:05.101973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.028 [2024-11-20 19:04:05.101980] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.028 [2024-11-20 19:04:05.114107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.028 [2024-11-20 19:04:05.114533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.028 [2024-11-20 19:04:05.114579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.028 [2024-11-20 19:04:05.114603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.028 [2024-11-20 19:04:05.115074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.028 [2024-11-20 19:04:05.115240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.028 [2024-11-20 19:04:05.115249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.028 [2024-11-20 19:04:05.115272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.028 [2024-11-20 19:04:05.115279] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.028 [2024-11-20 19:04:05.126940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.028 [2024-11-20 19:04:05.127332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.028 [2024-11-20 19:04:05.127349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.028 [2024-11-20 19:04:05.127356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.028 [2024-11-20 19:04:05.127516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.028 [2024-11-20 19:04:05.127677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.028 [2024-11-20 19:04:05.127686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.028 [2024-11-20 19:04:05.127692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.028 [2024-11-20 19:04:05.127701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.028 [2024-11-20 19:04:05.139759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.028 [2024-11-20 19:04:05.140195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.028 [2024-11-20 19:04:05.140254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.028 [2024-11-20 19:04:05.140278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.028 [2024-11-20 19:04:05.140859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.028 [2024-11-20 19:04:05.141440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.028 [2024-11-20 19:04:05.141450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.028 [2024-11-20 19:04:05.141456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.028 [2024-11-20 19:04:05.141463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.028 [2024-11-20 19:04:05.152535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.028 [2024-11-20 19:04:05.152883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.028 [2024-11-20 19:04:05.152900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.028 [2024-11-20 19:04:05.152908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.028 [2024-11-20 19:04:05.153068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.028 [2024-11-20 19:04:05.153250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.028 [2024-11-20 19:04:05.153260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.028 [2024-11-20 19:04:05.153267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.028 [2024-11-20 19:04:05.153273] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.028 [2024-11-20 19:04:05.165393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.028 [2024-11-20 19:04:05.165763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.028 [2024-11-20 19:04:05.165780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.028 [2024-11-20 19:04:05.165788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.028 [2024-11-20 19:04:05.165947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.028 [2024-11-20 19:04:05.166107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.028 [2024-11-20 19:04:05.166116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.028 [2024-11-20 19:04:05.166122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.028 [2024-11-20 19:04:05.166129] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.028 [2024-11-20 19:04:05.178479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.028 [2024-11-20 19:04:05.178901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.028 [2024-11-20 19:04:05.178918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.028 [2024-11-20 19:04:05.178925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.028 [2024-11-20 19:04:05.179094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.028 [2024-11-20 19:04:05.179269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.028 [2024-11-20 19:04:05.179279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.028 [2024-11-20 19:04:05.179286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.028 [2024-11-20 19:04:05.179293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.029 [2024-11-20 19:04:05.191517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.029 [2024-11-20 19:04:05.191871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.029 [2024-11-20 19:04:05.191889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.029 [2024-11-20 19:04:05.191897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.029 [2024-11-20 19:04:05.192073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.029 [2024-11-20 19:04:05.192256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.029 [2024-11-20 19:04:05.192267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.029 [2024-11-20 19:04:05.192274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.029 [2024-11-20 19:04:05.192281] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.029 [2024-11-20 19:04:05.204490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.029 [2024-11-20 19:04:05.204896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.029 [2024-11-20 19:04:05.204914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.029 [2024-11-20 19:04:05.204921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.029 [2024-11-20 19:04:05.205094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.029 [2024-11-20 19:04:05.205275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.029 [2024-11-20 19:04:05.205284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.029 [2024-11-20 19:04:05.205291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.029 [2024-11-20 19:04:05.205297] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.029 [2024-11-20 19:04:05.217492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.029 [2024-11-20 19:04:05.217892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.029 [2024-11-20 19:04:05.217908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.029 [2024-11-20 19:04:05.217916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.029 [2024-11-20 19:04:05.218091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.029 [2024-11-20 19:04:05.218272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.029 [2024-11-20 19:04:05.218281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.029 [2024-11-20 19:04:05.218288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.029 [2024-11-20 19:04:05.218294] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.029 [2024-11-20 19:04:05.230482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.029 [2024-11-20 19:04:05.230890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.029 [2024-11-20 19:04:05.230907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.029 [2024-11-20 19:04:05.230914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.029 [2024-11-20 19:04:05.231087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.029 [2024-11-20 19:04:05.231267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.029 [2024-11-20 19:04:05.231277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.029 [2024-11-20 19:04:05.231283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.029 [2024-11-20 19:04:05.231290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.029 [2024-11-20 19:04:05.243573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.029 [2024-11-20 19:04:05.243993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.029 [2024-11-20 19:04:05.244011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.029 [2024-11-20 19:04:05.244018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.029 [2024-11-20 19:04:05.244198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.029 [2024-11-20 19:04:05.244405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.029 [2024-11-20 19:04:05.244413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.029 [2024-11-20 19:04:05.244420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.029 [2024-11-20 19:04:05.244426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.029 [2024-11-20 19:04:05.256606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.029 [2024-11-20 19:04:05.257022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.029 [2024-11-20 19:04:05.257040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.029 [2024-11-20 19:04:05.257047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.029 [2024-11-20 19:04:05.257220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.029 [2024-11-20 19:04:05.257409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.029 [2024-11-20 19:04:05.257420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.029 [2024-11-20 19:04:05.257426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.029 [2024-11-20 19:04:05.257432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.029 [2024-11-20 19:04:05.269678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.029 [2024-11-20 19:04:05.270134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.029 [2024-11-20 19:04:05.270152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.029 [2024-11-20 19:04:05.270159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.029 [2024-11-20 19:04:05.270338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.029 [2024-11-20 19:04:05.270521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.029 [2024-11-20 19:04:05.270530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.029 [2024-11-20 19:04:05.270536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.029 [2024-11-20 19:04:05.270542] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.029 [2024-11-20 19:04:05.282704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.029 [2024-11-20 19:04:05.283053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.029 [2024-11-20 19:04:05.283070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.029 [2024-11-20 19:04:05.283078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.029 [2024-11-20 19:04:05.283260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.029 [2024-11-20 19:04:05.283433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.029 [2024-11-20 19:04:05.283442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.029 [2024-11-20 19:04:05.283448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.029 [2024-11-20 19:04:05.283454] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.029 [2024-11-20 19:04:05.295652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.029 [2024-11-20 19:04:05.296071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.029 [2024-11-20 19:04:05.296087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.029 [2024-11-20 19:04:05.296094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.030 [2024-11-20 19:04:05.296285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.030 [2024-11-20 19:04:05.296458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.030 [2024-11-20 19:04:05.296467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.030 [2024-11-20 19:04:05.296473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.030 [2024-11-20 19:04:05.296482] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.030 [2024-11-20 19:04:05.309889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.030 6038.80 IOPS, 23.59 MiB/s [2024-11-20T18:04:05.355Z] [2024-11-20 19:04:05.310298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.030 [2024-11-20 19:04:05.310315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.030 [2024-11-20 19:04:05.310322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.030 [2024-11-20 19:04:05.310489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.030 [2024-11-20 19:04:05.310657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.030 [2024-11-20 19:04:05.310665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.030 [2024-11-20 19:04:05.310671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.030 [2024-11-20 19:04:05.310677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.030 [2024-11-20 19:04:05.322912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.030 [2024-11-20 19:04:05.323314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.030 [2024-11-20 19:04:05.323332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.030 [2024-11-20 19:04:05.323339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.030 [2024-11-20 19:04:05.323506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.030 [2024-11-20 19:04:05.323673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.030 [2024-11-20 19:04:05.323681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.030 [2024-11-20 19:04:05.323687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.030 [2024-11-20 19:04:05.323693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.030 [2024-11-20 19:04:05.335907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.030 [2024-11-20 19:04:05.336327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.030 [2024-11-20 19:04:05.336344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.030 [2024-11-20 19:04:05.336351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.030 [2024-11-20 19:04:05.336518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.030 [2024-11-20 19:04:05.336686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.030 [2024-11-20 19:04:05.336694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.030 [2024-11-20 19:04:05.336700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.030 [2024-11-20 19:04:05.336706] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.030 [2024-11-20 19:04:05.348912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.030 [2024-11-20 19:04:05.349259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.030 [2024-11-20 19:04:05.349276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.030 [2024-11-20 19:04:05.349283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.030 [2024-11-20 19:04:05.349456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.030 [2024-11-20 19:04:05.349629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.030 [2024-11-20 19:04:05.349637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.030 [2024-11-20 19:04:05.349643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.030 [2024-11-20 19:04:05.349650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.290 [2024-11-20 19:04:05.361912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.290 [2024-11-20 19:04:05.362329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-11-20 19:04:05.362346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.290 [2024-11-20 19:04:05.362353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.290 [2024-11-20 19:04:05.362526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.291 [2024-11-20 19:04:05.362699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.291 [2024-11-20 19:04:05.362708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.291 [2024-11-20 19:04:05.362714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.291 [2024-11-20 19:04:05.362720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.291 [2024-11-20 19:04:05.374925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.291 [2024-11-20 19:04:05.375330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-11-20 19:04:05.375348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.291 [2024-11-20 19:04:05.375355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.291 [2024-11-20 19:04:05.375523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.291 [2024-11-20 19:04:05.375690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.291 [2024-11-20 19:04:05.375698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.291 [2024-11-20 19:04:05.375705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.291 [2024-11-20 19:04:05.375711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.291 [2024-11-20 19:04:05.387913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.291 [2024-11-20 19:04:05.388310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-11-20 19:04:05.388327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.291 [2024-11-20 19:04:05.388335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.291 [2024-11-20 19:04:05.388506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.291 [2024-11-20 19:04:05.388675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.291 [2024-11-20 19:04:05.388683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.291 [2024-11-20 19:04:05.388689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.291 [2024-11-20 19:04:05.388695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.291 [2024-11-20 19:04:05.400904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.291 [2024-11-20 19:04:05.401304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-11-20 19:04:05.401321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.291 [2024-11-20 19:04:05.401328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.291 [2024-11-20 19:04:05.401496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.291 [2024-11-20 19:04:05.401665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.291 [2024-11-20 19:04:05.401673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.291 [2024-11-20 19:04:05.401679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.291 [2024-11-20 19:04:05.401685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.291 [2024-11-20 19:04:05.413865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.291 [2024-11-20 19:04:05.414242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-11-20 19:04:05.414258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.291 [2024-11-20 19:04:05.414266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.291 [2024-11-20 19:04:05.414432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.291 [2024-11-20 19:04:05.414600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.291 [2024-11-20 19:04:05.414608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.291 [2024-11-20 19:04:05.414614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.291 [2024-11-20 19:04:05.414620] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.291 [2024-11-20 19:04:05.426821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.291 [2024-11-20 19:04:05.427220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-11-20 19:04:05.427237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.291 [2024-11-20 19:04:05.427243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.291 [2024-11-20 19:04:05.427411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.291 [2024-11-20 19:04:05.427579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.291 [2024-11-20 19:04:05.427591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.291 [2024-11-20 19:04:05.427597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.291 [2024-11-20 19:04:05.427603] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.291 [2024-11-20 19:04:05.439805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.291 [2024-11-20 19:04:05.440200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.291 [2024-11-20 19:04:05.440221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.291 [2024-11-20 19:04:05.440228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.291 [2024-11-20 19:04:05.440395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.291 [2024-11-20 19:04:05.440564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.291 [2024-11-20 19:04:05.440572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.291 [2024-11-20 19:04:05.440578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.291 [2024-11-20 19:04:05.440584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.291 [2024-11-20 19:04:05.452788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.291 [2024-11-20 19:04:05.453210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.291 [2024-11-20 19:04:05.453227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.291 [2024-11-20 19:04:05.453234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.291 [2024-11-20 19:04:05.453401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.291 [2024-11-20 19:04:05.453569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.291 [2024-11-20 19:04:05.453577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.291 [2024-11-20 19:04:05.453583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.291 [2024-11-20 19:04:05.453589] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.291 [2024-11-20 19:04:05.465797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.291 [2024-11-20 19:04:05.466173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.291 [2024-11-20 19:04:05.466189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.291 [2024-11-20 19:04:05.466196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.291 [2024-11-20 19:04:05.466389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.291 [2024-11-20 19:04:05.466562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.291 [2024-11-20 19:04:05.466571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.292 [2024-11-20 19:04:05.466577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.292 [2024-11-20 19:04:05.466587] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.292 [2024-11-20 19:04:05.478780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.292 [2024-11-20 19:04:05.479109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.292 [2024-11-20 19:04:05.479126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.292 [2024-11-20 19:04:05.479133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.292 [2024-11-20 19:04:05.479324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.292 [2024-11-20 19:04:05.479497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.292 [2024-11-20 19:04:05.479506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.292 [2024-11-20 19:04:05.479512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.292 [2024-11-20 19:04:05.479518] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.292 [2024-11-20 19:04:05.491713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.292 [2024-11-20 19:04:05.492129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.292 [2024-11-20 19:04:05.492145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.292 [2024-11-20 19:04:05.492152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.292 [2024-11-20 19:04:05.492324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.292 [2024-11-20 19:04:05.492494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.292 [2024-11-20 19:04:05.492503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.292 [2024-11-20 19:04:05.492510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.292 [2024-11-20 19:04:05.492516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.292 [2024-11-20 19:04:05.504719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.292 [2024-11-20 19:04:05.505121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.292 [2024-11-20 19:04:05.505137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.292 [2024-11-20 19:04:05.505144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.292 [2024-11-20 19:04:05.505335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.292 [2024-11-20 19:04:05.505509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.292 [2024-11-20 19:04:05.505517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.292 [2024-11-20 19:04:05.505524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.292 [2024-11-20 19:04:05.505531] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.292 [2024-11-20 19:04:05.517603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.292 [2024-11-20 19:04:05.518062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.292 [2024-11-20 19:04:05.518078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.292 [2024-11-20 19:04:05.518086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.292 [2024-11-20 19:04:05.518264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.292 [2024-11-20 19:04:05.518439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.292 [2024-11-20 19:04:05.518447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.292 [2024-11-20 19:04:05.518453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.292 [2024-11-20 19:04:05.518459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.292 [2024-11-20 19:04:05.530663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.292 [2024-11-20 19:04:05.531120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.292 [2024-11-20 19:04:05.531138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.292 [2024-11-20 19:04:05.531145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.292 [2024-11-20 19:04:05.531323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.292 [2024-11-20 19:04:05.531497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.292 [2024-11-20 19:04:05.531506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.292 [2024-11-20 19:04:05.531512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.292 [2024-11-20 19:04:05.531518] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.292 [2024-11-20 19:04:05.543769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.292 [2024-11-20 19:04:05.544205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.292 [2024-11-20 19:04:05.544223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.292 [2024-11-20 19:04:05.544231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.292 [2024-11-20 19:04:05.544403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.292 [2024-11-20 19:04:05.544577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.292 [2024-11-20 19:04:05.544586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.292 [2024-11-20 19:04:05.544592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.292 [2024-11-20 19:04:05.544598] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.292 [2024-11-20 19:04:05.556823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.292 [2024-11-20 19:04:05.557234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.292 [2024-11-20 19:04:05.557251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.292 [2024-11-20 19:04:05.557259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.292 [2024-11-20 19:04:05.557430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.292 [2024-11-20 19:04:05.557599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.292 [2024-11-20 19:04:05.557608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.292 [2024-11-20 19:04:05.557614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.292 [2024-11-20 19:04:05.557620] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.292 [2024-11-20 19:04:05.569844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.292 [2024-11-20 19:04:05.570291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.292 [2024-11-20 19:04:05.570307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.292 [2024-11-20 19:04:05.570314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.292 [2024-11-20 19:04:05.570482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.292 [2024-11-20 19:04:05.570650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.292 [2024-11-20 19:04:05.570658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.292 [2024-11-20 19:04:05.570664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.292 [2024-11-20 19:04:05.570670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.292 [2024-11-20 19:04:05.582698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.292 [2024-11-20 19:04:05.583113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.292 [2024-11-20 19:04:05.583130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.292 [2024-11-20 19:04:05.583137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.293 [2024-11-20 19:04:05.583309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.293 [2024-11-20 19:04:05.583478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.293 [2024-11-20 19:04:05.583486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.293 [2024-11-20 19:04:05.583492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.293 [2024-11-20 19:04:05.583499] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.293 [2024-11-20 19:04:05.595716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.293 [2024-11-20 19:04:05.596126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.293 [2024-11-20 19:04:05.596142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.293 [2024-11-20 19:04:05.596150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.293 [2024-11-20 19:04:05.596340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.293 [2024-11-20 19:04:05.596515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.293 [2024-11-20 19:04:05.596527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.293 [2024-11-20 19:04:05.596533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.293 [2024-11-20 19:04:05.596539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.293 [2024-11-20 19:04:05.608751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.293 [2024-11-20 19:04:05.609168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.293 [2024-11-20 19:04:05.609185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.293 [2024-11-20 19:04:05.609192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.293 [2024-11-20 19:04:05.609369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.293 [2024-11-20 19:04:05.609544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.293 [2024-11-20 19:04:05.609553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.293 [2024-11-20 19:04:05.609559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.293 [2024-11-20 19:04:05.609565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.554 [2024-11-20 19:04:05.621712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.554 [2024-11-20 19:04:05.622112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.554 [2024-11-20 19:04:05.622129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.554 [2024-11-20 19:04:05.622137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.554 [2024-11-20 19:04:05.622314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.554 [2024-11-20 19:04:05.622487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.554 [2024-11-20 19:04:05.622496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.554 [2024-11-20 19:04:05.622502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.554 [2024-11-20 19:04:05.622508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.554 [2024-11-20 19:04:05.634621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.554 [2024-11-20 19:04:05.635032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.554 [2024-11-20 19:04:05.635048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.554 [2024-11-20 19:04:05.635055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.554 [2024-11-20 19:04:05.635232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.554 [2024-11-20 19:04:05.635405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.554 [2024-11-20 19:04:05.635413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.554 [2024-11-20 19:04:05.635420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.554 [2024-11-20 19:04:05.635429] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.554 [2024-11-20 19:04:05.647601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.554 [2024-11-20 19:04:05.648028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.554 [2024-11-20 19:04:05.648045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.554 [2024-11-20 19:04:05.648052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.554 [2024-11-20 19:04:05.648225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.554 [2024-11-20 19:04:05.648393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.554 [2024-11-20 19:04:05.648402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.554 [2024-11-20 19:04:05.648408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.554 [2024-11-20 19:04:05.648414] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.554 [2024-11-20 19:04:05.660627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.554 [2024-11-20 19:04:05.661090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.554 [2024-11-20 19:04:05.661107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.554 [2024-11-20 19:04:05.661114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.554 [2024-11-20 19:04:05.661291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.554 [2024-11-20 19:04:05.661474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.554 [2024-11-20 19:04:05.661483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.554 [2024-11-20 19:04:05.661489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.554 [2024-11-20 19:04:05.661495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.554 [2024-11-20 19:04:05.673557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.554 [2024-11-20 19:04:05.673952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.554 [2024-11-20 19:04:05.673969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.554 [2024-11-20 19:04:05.673976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.554 [2024-11-20 19:04:05.674143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.554 [2024-11-20 19:04:05.674334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.554 [2024-11-20 19:04:05.674343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.554 [2024-11-20 19:04:05.674350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.554 [2024-11-20 19:04:05.674356] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.554 [2024-11-20 19:04:05.686569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.554 [2024-11-20 19:04:05.686980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.555 [2024-11-20 19:04:05.686999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.555 [2024-11-20 19:04:05.687006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.555 [2024-11-20 19:04:05.687173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.555 [2024-11-20 19:04:05.687366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.555 [2024-11-20 19:04:05.687374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.555 [2024-11-20 19:04:05.687381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.555 [2024-11-20 19:04:05.687387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.555 [2024-11-20 19:04:05.699601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.555 [2024-11-20 19:04:05.699965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.555 [2024-11-20 19:04:05.699982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.555 [2024-11-20 19:04:05.699989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.555 [2024-11-20 19:04:05.700162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.555 [2024-11-20 19:04:05.700341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.555 [2024-11-20 19:04:05.700350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.555 [2024-11-20 19:04:05.700356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.555 [2024-11-20 19:04:05.700363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.555 [2024-11-20 19:04:05.712492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.555 [2024-11-20 19:04:05.712777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.555 [2024-11-20 19:04:05.712793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.555 [2024-11-20 19:04:05.712801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.555 [2024-11-20 19:04:05.712968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.555 [2024-11-20 19:04:05.713136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.555 [2024-11-20 19:04:05.713144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.555 [2024-11-20 19:04:05.713151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.555 [2024-11-20 19:04:05.713156] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.555 [2024-11-20 19:04:05.725402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.555 [2024-11-20 19:04:05.725741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.555 [2024-11-20 19:04:05.725758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.555 [2024-11-20 19:04:05.725765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.555 [2024-11-20 19:04:05.725936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.555 [2024-11-20 19:04:05.726104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.555 [2024-11-20 19:04:05.726113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.555 [2024-11-20 19:04:05.726119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.555 [2024-11-20 19:04:05.726125] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.555 [2024-11-20 19:04:05.738369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.555 [2024-11-20 19:04:05.738737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.555 [2024-11-20 19:04:05.738753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.555 [2024-11-20 19:04:05.738760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.555 [2024-11-20 19:04:05.738932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.555 [2024-11-20 19:04:05.739105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.555 [2024-11-20 19:04:05.739114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.555 [2024-11-20 19:04:05.739120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.555 [2024-11-20 19:04:05.739126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.555 [2024-11-20 19:04:05.751341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.555 [2024-11-20 19:04:05.751743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.555 [2024-11-20 19:04:05.751760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.555 [2024-11-20 19:04:05.751766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.555 [2024-11-20 19:04:05.751934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.555 [2024-11-20 19:04:05.752123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.555 [2024-11-20 19:04:05.752131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.555 [2024-11-20 19:04:05.752137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.555 [2024-11-20 19:04:05.752144] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.555 [2024-11-20 19:04:05.764244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.555 [2024-11-20 19:04:05.764555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.555 [2024-11-20 19:04:05.764571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.555 [2024-11-20 19:04:05.764579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.555 [2024-11-20 19:04:05.764746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.555 [2024-11-20 19:04:05.764915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.555 [2024-11-20 19:04:05.764927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.555 [2024-11-20 19:04:05.764933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.555 [2024-11-20 19:04:05.764939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.555 [2024-11-20 19:04:05.777131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.555 [2024-11-20 19:04:05.777533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.555 [2024-11-20 19:04:05.777550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.555 [2024-11-20 19:04:05.777558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.555 [2024-11-20 19:04:05.777730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.555 [2024-11-20 19:04:05.777904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.555 [2024-11-20 19:04:05.777913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.555 [2024-11-20 19:04:05.777919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.555 [2024-11-20 19:04:05.777926] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.555 [2024-11-20 19:04:05.790140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.555 [2024-11-20 19:04:05.790426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.555 [2024-11-20 19:04:05.790444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.555 [2024-11-20 19:04:05.790451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.555 [2024-11-20 19:04:05.790624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.555 [2024-11-20 19:04:05.790797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.555 [2024-11-20 19:04:05.790805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.555 [2024-11-20 19:04:05.790812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.555 [2024-11-20 19:04:05.790818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.555 [2024-11-20 19:04:05.803099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.555 [2024-11-20 19:04:05.803452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.555 [2024-11-20 19:04:05.803469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.555 [2024-11-20 19:04:05.803477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.555 [2024-11-20 19:04:05.803650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.555 [2024-11-20 19:04:05.803823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.555 [2024-11-20 19:04:05.803832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.555 [2024-11-20 19:04:05.803838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.555 [2024-11-20 19:04:05.803844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.555 [2024-11-20 19:04:05.816062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.556 [2024-11-20 19:04:05.816428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.556 [2024-11-20 19:04:05.816445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.556 [2024-11-20 19:04:05.816452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.556 [2024-11-20 19:04:05.816625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.556 [2024-11-20 19:04:05.816798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.556 [2024-11-20 19:04:05.816806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.556 [2024-11-20 19:04:05.816814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.556 [2024-11-20 19:04:05.816820] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.556 [2024-11-20 19:04:05.829049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.556 [2024-11-20 19:04:05.829403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.556 [2024-11-20 19:04:05.829420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.556 [2024-11-20 19:04:05.829428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.556 [2024-11-20 19:04:05.829606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.556 [2024-11-20 19:04:05.829776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.556 [2024-11-20 19:04:05.829784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.556 [2024-11-20 19:04:05.829790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.556 [2024-11-20 19:04:05.829796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.556 [2024-11-20 19:04:05.841982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.556 [2024-11-20 19:04:05.842392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.556 [2024-11-20 19:04:05.842409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.556 [2024-11-20 19:04:05.842417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.556 [2024-11-20 19:04:05.842588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.556 [2024-11-20 19:04:05.842761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.556 [2024-11-20 19:04:05.842769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.556 [2024-11-20 19:04:05.842776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.556 [2024-11-20 19:04:05.842781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.556 [2024-11-20 19:04:05.854945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.556 [2024-11-20 19:04:05.855353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.556 [2024-11-20 19:04:05.855373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.556 [2024-11-20 19:04:05.855380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.556 [2024-11-20 19:04:05.855547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.556 [2024-11-20 19:04:05.855715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.556 [2024-11-20 19:04:05.855723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.556 [2024-11-20 19:04:05.855729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.556 [2024-11-20 19:04:05.855735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.556 [2024-11-20 19:04:05.867931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.556 [2024-11-20 19:04:05.868333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.556 [2024-11-20 19:04:05.868351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.556 [2024-11-20 19:04:05.868358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.556 [2024-11-20 19:04:05.868526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.556 [2024-11-20 19:04:05.868695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.556 [2024-11-20 19:04:05.868704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.556 [2024-11-20 19:04:05.868710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.556 [2024-11-20 19:04:05.868716] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.817 [2024-11-20 19:04:05.880916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.817 [2024-11-20 19:04:05.881336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.817 [2024-11-20 19:04:05.881353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.817 [2024-11-20 19:04:05.881361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.817 [2024-11-20 19:04:05.881541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.817 [2024-11-20 19:04:05.881710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.817 [2024-11-20 19:04:05.881718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.817 [2024-11-20 19:04:05.881724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.817 [2024-11-20 19:04:05.881730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.817 [2024-11-20 19:04:05.893934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.817 [2024-11-20 19:04:05.894296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.817 [2024-11-20 19:04:05.894314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.817 [2024-11-20 19:04:05.894321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.817 [2024-11-20 19:04:05.894492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.817 [2024-11-20 19:04:05.894659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.817 [2024-11-20 19:04:05.894667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.817 [2024-11-20 19:04:05.894673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.817 [2024-11-20 19:04:05.894679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.817 [2024-11-20 19:04:05.906863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.817 [2024-11-20 19:04:05.907283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.817 [2024-11-20 19:04:05.907301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.817 [2024-11-20 19:04:05.907309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.817 [2024-11-20 19:04:05.907488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.817 [2024-11-20 19:04:05.907656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.817 [2024-11-20 19:04:05.907665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.817 [2024-11-20 19:04:05.907671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.817 [2024-11-20 19:04:05.907677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.817 [2024-11-20 19:04:05.919859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.817 [2024-11-20 19:04:05.920242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.817 [2024-11-20 19:04:05.920258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.817 [2024-11-20 19:04:05.920266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.817 [2024-11-20 19:04:05.920433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.817 [2024-11-20 19:04:05.920601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.817 [2024-11-20 19:04:05.920609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.817 [2024-11-20 19:04:05.920616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.817 [2024-11-20 19:04:05.920621] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
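Each iteration above is the same failure cycle repeating at a roughly 13 ms cadence: a disconnect notice, a refused connect() (errno 111), qpair socket errors, and finally "controller reinitialization failed" / "Resetting controller failed" before the next attempt. A minimal fixed-interval reconnect loop in the same spirit can be sketched as follows; this is an illustration only, not SPDK's bdev_nvme reset logic, and the function name, attempt count, and delay are made up for the example:

```python
import socket
import time


def try_reconnect(addr: str, port: int, attempts: int = 5, delay: float = 0.013):
    """Retry a TCP connect at a fixed interval (hypothetical helper).

    Returns the 1-based attempt number on success, or None once every
    attempt has failed, much as each log iteration ends in
    'Resetting controller failed.' before the next retry begins.
    """
    for attempt in range(1, attempts + 1):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            s.connect((addr, port))
            return attempt
        except OSError:
            # connect() was refused or failed; wait before retrying,
            # mirroring the ~13 ms spacing between log iterations.
            time.sleep(delay)
        finally:
            s.close()
    return None
```

A real initiator would typically cap total retry time or back off exponentially rather than loop at a fixed interval indefinitely.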
00:26:43.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3797323 Killed "${NVMF_APP[@]}" "$@"
00:26:43.817 19:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:26:43.817 19:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:26:43.817 19:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:43.817 19:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:43.817 19:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:43.817 [2024-11-20 19:04:05.932878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.817 [2024-11-20 19:04:05.933318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.817 [2024-11-20 19:04:05.933335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.817 [2024-11-20 19:04:05.933346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.817 [2024-11-20 19:04:05.933519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.817 [2024-11-20 19:04:05.933692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.817 [2024-11-20 19:04:05.933702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.818 [2024-11-20 19:04:05.933709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.818 [2024-11-20 19:04:05.933715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.818 19:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3798722
00:26:43.818 19:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3798722
00:26:43.818 19:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:26:43.818 19:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3798722 ']'
00:26:43.818 19:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:43.818 19:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:43.818 19:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:43.818 19:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:43.818 19:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:43.818 [2024-11-20 19:04:05.945936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.818 [2024-11-20 19:04:05.946316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.818 [2024-11-20 19:04:05.946334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.818 [2024-11-20 19:04:05.946341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.818 [2024-11-20 19:04:05.946514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.818 [2024-11-20 19:04:05.946691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.818 [2024-11-20 19:04:05.946699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.818 [2024-11-20 19:04:05.946706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.818 [2024-11-20 19:04:05.946712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.818 [2024-11-20 19:04:05.958928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.818 [2024-11-20 19:04:05.959364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.818 [2024-11-20 19:04:05.959381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.818 [2024-11-20 19:04:05.959389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.818 [2024-11-20 19:04:05.959562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.818 [2024-11-20 19:04:05.959736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.818 [2024-11-20 19:04:05.959750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.818 [2024-11-20 19:04:05.959757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.818 [2024-11-20 19:04:05.959763] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.818 [2024-11-20 19:04:05.971837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.818 [2024-11-20 19:04:05.972182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.818 [2024-11-20 19:04:05.972199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.818 [2024-11-20 19:04:05.972215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.818 [2024-11-20 19:04:05.972383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.818 [2024-11-20 19:04:05.972555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.818 [2024-11-20 19:04:05.972563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.818 [2024-11-20 19:04:05.972569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.818 [2024-11-20 19:04:05.972575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.818 [2024-11-20 19:04:05.984837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.818 [2024-11-20 19:04:05.984954] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization...
00:26:43.818 [2024-11-20 19:04:05.984999] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:43.818 [2024-11-20 19:04:05.985278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.818 [2024-11-20 19:04:05.985297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:43.818 [2024-11-20 19:04:05.985305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:43.818 [2024-11-20 19:04:05.985485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:43.818 [2024-11-20 19:04:05.985653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.818 [2024-11-20 19:04:05.985661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.818 [2024-11-20 19:04:05.985668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.818 [2024-11-20 19:04:05.985674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.818 [2024-11-20 19:04:05.997977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.818 [2024-11-20 19:04:05.998385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.818 [2024-11-20 19:04:05.998403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.818 [2024-11-20 19:04:05.998411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.818 [2024-11-20 19:04:05.998585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.818 [2024-11-20 19:04:05.998762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.818 [2024-11-20 19:04:05.998772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.818 [2024-11-20 19:04:05.998779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.818 [2024-11-20 19:04:05.998785] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.818 [2024-11-20 19:04:06.010988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.818 [2024-11-20 19:04:06.011307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.818 [2024-11-20 19:04:06.011325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.818 [2024-11-20 19:04:06.011333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.818 [2024-11-20 19:04:06.011515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.818 [2024-11-20 19:04:06.011684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.818 [2024-11-20 19:04:06.011693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.818 [2024-11-20 19:04:06.011699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.818 [2024-11-20 19:04:06.011705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.818 [2024-11-20 19:04:06.023932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.818 [2024-11-20 19:04:06.024283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.818 [2024-11-20 19:04:06.024300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.818 [2024-11-20 19:04:06.024308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.818 [2024-11-20 19:04:06.024476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.818 [2024-11-20 19:04:06.024644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.818 [2024-11-20 19:04:06.024653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.818 [2024-11-20 19:04:06.024660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.818 [2024-11-20 19:04:06.024667] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.818 [2024-11-20 19:04:06.036886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.818 [2024-11-20 19:04:06.037310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.818 [2024-11-20 19:04:06.037328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.818 [2024-11-20 19:04:06.037336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.818 [2024-11-20 19:04:06.037509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.818 [2024-11-20 19:04:06.037682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.818 [2024-11-20 19:04:06.037691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.818 [2024-11-20 19:04:06.037698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.818 [2024-11-20 19:04:06.037708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.818 [2024-11-20 19:04:06.049905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.818 [2024-11-20 19:04:06.050346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.818 [2024-11-20 19:04:06.050362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.818 [2024-11-20 19:04:06.050370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.819 [2024-11-20 19:04:06.050542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.819 [2024-11-20 19:04:06.050717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.819 [2024-11-20 19:04:06.050726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.819 [2024-11-20 19:04:06.050733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.819 [2024-11-20 19:04:06.050740] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.819 [2024-11-20 19:04:06.062964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.819 [2024-11-20 19:04:06.063354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.819 [2024-11-20 19:04:06.063372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.819 [2024-11-20 19:04:06.063381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.819 [2024-11-20 19:04:06.063553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.819 [2024-11-20 19:04:06.063728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.819 [2024-11-20 19:04:06.063738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.819 [2024-11-20 19:04:06.063744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.819 [2024-11-20 19:04:06.063750] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.819 [2024-11-20 19:04:06.066159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:43.819 [2024-11-20 19:04:06.075937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.819 [2024-11-20 19:04:06.076402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.819 [2024-11-20 19:04:06.076421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.819 [2024-11-20 19:04:06.076430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.819 [2024-11-20 19:04:06.076599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.819 [2024-11-20 19:04:06.076768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.819 [2024-11-20 19:04:06.076778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.819 [2024-11-20 19:04:06.076786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.819 [2024-11-20 19:04:06.076794] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.819 [2024-11-20 19:04:06.088996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.819 [2024-11-20 19:04:06.089332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.819 [2024-11-20 19:04:06.089350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.819 [2024-11-20 19:04:06.089357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.819 [2024-11-20 19:04:06.089526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.819 [2024-11-20 19:04:06.089696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.819 [2024-11-20 19:04:06.089705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.819 [2024-11-20 19:04:06.089712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.819 [2024-11-20 19:04:06.089718] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.819 [2024-11-20 19:04:06.101944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.819 [2024-11-20 19:04:06.102292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.819 [2024-11-20 19:04:06.102310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.819 [2024-11-20 19:04:06.102318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.819 [2024-11-20 19:04:06.102485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.819 [2024-11-20 19:04:06.102655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.819 [2024-11-20 19:04:06.102664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.819 [2024-11-20 19:04:06.102672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.819 [2024-11-20 19:04:06.102679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:43.819 [2024-11-20 19:04:06.109503] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:43.819 [2024-11-20 19:04:06.109528] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:43.819 [2024-11-20 19:04:06.109535] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:43.819 [2024-11-20 19:04:06.109541] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:26:43.819 [2024-11-20 19:04:06.109546] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:43.819 [2024-11-20 19:04:06.110904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:43.819 [2024-11-20 19:04:06.111019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:43.819 [2024-11-20 19:04:06.111020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:43.819 [2024-11-20 19:04:06.114951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.819 [2024-11-20 19:04:06.115393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.819 [2024-11-20 19:04:06.115411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.819 [2024-11-20 19:04:06.115419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.819 [2024-11-20 19:04:06.115594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.819 [2024-11-20 19:04:06.115772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.819 [2024-11-20 19:04:06.115780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.819 [2024-11-20 19:04:06.115787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.819 [2024-11-20 19:04:06.115793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.819 [2024-11-20 19:04:06.127995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.819 [2024-11-20 19:04:06.128444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.819 [2024-11-20 19:04:06.128462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:43.819 [2024-11-20 19:04:06.128470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:43.819 [2024-11-20 19:04:06.128644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:43.819 [2024-11-20 19:04:06.128819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.819 [2024-11-20 19:04:06.128828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.819 [2024-11-20 19:04:06.128835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.819 [2024-11-20 19:04:06.128841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.819 [2024-11-20 19:04:06.141038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.080 [2024-11-20 19:04:06.141470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.080 [2024-11-20 19:04:06.141489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:44.080 [2024-11-20 19:04:06.141497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:44.080 [2024-11-20 19:04:06.141671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:44.080 [2024-11-20 19:04:06.141843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.081 [2024-11-20 19:04:06.141851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.081 [2024-11-20 19:04:06.141858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.081 [2024-11-20 19:04:06.141865] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.081 [2024-11-20 19:04:06.154065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.081 [2024-11-20 19:04:06.154491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.081 [2024-11-20 19:04:06.154510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:44.081 [2024-11-20 19:04:06.154518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:44.081 [2024-11-20 19:04:06.154692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:44.081 [2024-11-20 19:04:06.154866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.081 [2024-11-20 19:04:06.154875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.081 [2024-11-20 19:04:06.154882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.081 [2024-11-20 19:04:06.154894] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.081 [2024-11-20 19:04:06.167096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.081 [2024-11-20 19:04:06.167520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.081 [2024-11-20 19:04:06.167540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:44.081 [2024-11-20 19:04:06.167548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:44.081 [2024-11-20 19:04:06.167721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:44.081 [2024-11-20 19:04:06.167895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.081 [2024-11-20 19:04:06.167904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.081 [2024-11-20 19:04:06.167910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.081 [2024-11-20 19:04:06.167917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.081 [2024-11-20 19:04:06.180158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.081 [2024-11-20 19:04:06.180575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.081 [2024-11-20 19:04:06.180593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:44.081 [2024-11-20 19:04:06.180601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:44.081 [2024-11-20 19:04:06.180775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:44.081 [2024-11-20 19:04:06.180948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.081 [2024-11-20 19:04:06.180957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.081 [2024-11-20 19:04:06.180963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.081 [2024-11-20 19:04:06.180970] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.081 [2024-11-20 19:04:06.193157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.081 [2024-11-20 19:04:06.193570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.081 [2024-11-20 19:04:06.193587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:44.081 [2024-11-20 19:04:06.193594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:44.081 [2024-11-20 19:04:06.193768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:44.081 [2024-11-20 19:04:06.193941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.081 [2024-11-20 19:04:06.193950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.081 [2024-11-20 19:04:06.193957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.081 [2024-11-20 19:04:06.193963] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.081 [2024-11-20 19:04:06.206150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.081 [2024-11-20 19:04:06.206525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.081 [2024-11-20 19:04:06.206542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:44.081 [2024-11-20 19:04:06.206549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:44.081 [2024-11-20 19:04:06.206722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:44.081 [2024-11-20 19:04:06.206896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.081 [2024-11-20 19:04:06.206904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.081 [2024-11-20 19:04:06.206911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.081 [2024-11-20 19:04:06.206917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.081 [2024-11-20 19:04:06.219266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.081 [2024-11-20 19:04:06.219673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.081 [2024-11-20 19:04:06.219690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:44.081 [2024-11-20 19:04:06.219697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:44.081 [2024-11-20 19:04:06.219870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:44.081 [2024-11-20 19:04:06.220042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.081 [2024-11-20 19:04:06.220051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.081 [2024-11-20 19:04:06.220057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.081 [2024-11-20 19:04:06.220063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.081 [2024-11-20 19:04:06.232242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.081 [2024-11-20 19:04:06.232663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.081 [2024-11-20 19:04:06.232679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:44.081 [2024-11-20 19:04:06.232687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:44.081 [2024-11-20 19:04:06.232859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:44.081 [2024-11-20 19:04:06.233032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.081 [2024-11-20 19:04:06.233041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.081 [2024-11-20 19:04:06.233047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.081 [2024-11-20 19:04:06.233054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.081 [2024-11-20 19:04:06.245287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.081 [2024-11-20 19:04:06.245679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.081 [2024-11-20 19:04:06.245696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:44.081 [2024-11-20 19:04:06.245709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:44.081 [2024-11-20 19:04:06.245882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:44.081 [2024-11-20 19:04:06.246056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.081 [2024-11-20 19:04:06.246064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.081 [2024-11-20 19:04:06.246070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.081 [2024-11-20 19:04:06.246077] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.081 [2024-11-20 19:04:06.258268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.081 [2024-11-20 19:04:06.258691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.081 [2024-11-20 19:04:06.258708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:44.081 [2024-11-20 19:04:06.258716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:44.081 [2024-11-20 19:04:06.258888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:44.081 [2024-11-20 19:04:06.259062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.081 [2024-11-20 19:04:06.259070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.081 [2024-11-20 19:04:06.259077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.081 [2024-11-20 19:04:06.259083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.081 [2024-11-20 19:04:06.271278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.081 [2024-11-20 19:04:06.271687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.081 [2024-11-20 19:04:06.271704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:44.081 [2024-11-20 19:04:06.271711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:44.082 [2024-11-20 19:04:06.271884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:44.082 [2024-11-20 19:04:06.272058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.082 [2024-11-20 19:04:06.272066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.082 [2024-11-20 19:04:06.272072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.082 [2024-11-20 19:04:06.272079] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.082 [2024-11-20 19:04:06.284310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.082 [2024-11-20 19:04:06.284720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.082 [2024-11-20 19:04:06.284737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:44.082 [2024-11-20 19:04:06.284745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:44.082 [2024-11-20 19:04:06.284918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:44.082 [2024-11-20 19:04:06.285092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.082 [2024-11-20 19:04:06.285103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.082 [2024-11-20 19:04:06.285110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.082 [2024-11-20 19:04:06.285116] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.082 [2024-11-20 19:04:06.297301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.082 [2024-11-20 19:04:06.297712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.082 [2024-11-20 19:04:06.297729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:44.082 [2024-11-20 19:04:06.297737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:44.082 [2024-11-20 19:04:06.297910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:44.082 [2024-11-20 19:04:06.298084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.082 [2024-11-20 19:04:06.298092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.082 [2024-11-20 19:04:06.298100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.082 [2024-11-20 19:04:06.298106] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.082 [2024-11-20 19:04:06.310289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.082 [2024-11-20 19:04:06.310691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.082 [2024-11-20 19:04:06.310707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:44.082 [2024-11-20 19:04:06.310714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:44.082 [2024-11-20 19:04:06.310886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:44.082 [2024-11-20 19:04:06.311060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.082 [2024-11-20 19:04:06.311068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.082 [2024-11-20 19:04:06.311075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.082 [2024-11-20 19:04:06.311081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.082 5032.33 IOPS, 19.66 MiB/s [2024-11-20T18:04:06.407Z] [2024-11-20 19:04:06.323373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.082 [2024-11-20 19:04:06.323708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.082 [2024-11-20 19:04:06.323725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:44.082 [2024-11-20 19:04:06.323733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:44.082 [2024-11-20 19:04:06.323906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:44.082 [2024-11-20 19:04:06.324079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.082 [2024-11-20 19:04:06.324088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.082 [2024-11-20 19:04:06.324094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.082 [2024-11-20 19:04:06.324104] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.082 [2024-11-20 19:04:06.336449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:44.082 [2024-11-20 19:04:06.336859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.082 [2024-11-20 19:04:06.336876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:44.082 [2024-11-20 19:04:06.336883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:44.082 [2024-11-20 19:04:06.337057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:44.082 [2024-11-20 19:04:06.337234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:44.082 [2024-11-20 19:04:06.337243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:44.082 [2024-11-20 19:04:06.337250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:44.082 [2024-11-20 19:04:06.337257] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:44.082 [2024-11-20 19:04:06.349441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:44.082 [2024-11-20 19:04:06.349849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.082 [2024-11-20 19:04:06.349866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:44.082 [2024-11-20 19:04:06.349873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:44.082 [2024-11-20 19:04:06.350046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:44.082 [2024-11-20 19:04:06.350223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:44.082 [2024-11-20 19:04:06.350232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:44.082 [2024-11-20 19:04:06.350239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:44.082 [2024-11-20 19:04:06.350245] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:44.082 [2024-11-20 19:04:06.362420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:44.082 [2024-11-20 19:04:06.362828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.082 [2024-11-20 19:04:06.362845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:44.082 [2024-11-20 19:04:06.362852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:44.082 [2024-11-20 19:04:06.363024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:44.082 [2024-11-20 19:04:06.363198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:44.082 [2024-11-20 19:04:06.363212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:44.082 [2024-11-20 19:04:06.363219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:44.082 [2024-11-20 19:04:06.363225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:44.082 [2024-11-20 19:04:06.375419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:44.082 [2024-11-20 19:04:06.375824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.082 [2024-11-20 19:04:06.375840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:44.082 [2024-11-20 19:04:06.375848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:44.082 [2024-11-20 19:04:06.376020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:44.082 [2024-11-20 19:04:06.376193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:44.082 [2024-11-20 19:04:06.376205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:44.082 [2024-11-20 19:04:06.376213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:44.082 [2024-11-20 19:04:06.376220] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:44.082 [2024-11-20 19:04:06.388437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:44.082 [2024-11-20 19:04:06.388864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.082 [2024-11-20 19:04:06.388881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:44.082 [2024-11-20 19:04:06.388890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:44.082 [2024-11-20 19:04:06.389062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:44.082 [2024-11-20 19:04:06.389240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:44.082 [2024-11-20 19:04:06.389249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:44.082 [2024-11-20 19:04:06.389256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:44.082 [2024-11-20 19:04:06.389262] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:44.082 [2024-11-20 19:04:06.401457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:44.082 [2024-11-20 19:04:06.401741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.083 [2024-11-20 19:04:06.401758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:44.083 [2024-11-20 19:04:06.401765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:44.083 [2024-11-20 19:04:06.401938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:44.083 [2024-11-20 19:04:06.402112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:44.083 [2024-11-20 19:04:06.402120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:44.083 [2024-11-20 19:04:06.402126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:44.083 [2024-11-20 19:04:06.402132] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:44.346 [2024-11-20 19:04:06.414488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:44.346 [2024-11-20 19:04:06.414903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.346 [2024-11-20 19:04:06.414919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:44.346 [2024-11-20 19:04:06.414927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:44.346 [2024-11-20 19:04:06.415103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:44.346 [2024-11-20 19:04:06.415282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:44.346 [2024-11-20 19:04:06.415291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:44.346 [2024-11-20 19:04:06.415297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:44.346 [2024-11-20 19:04:06.415303] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:44.346 [2024-11-20 19:04:06.427480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:44.346 [2024-11-20 19:04:06.427882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.346 [2024-11-20 19:04:06.427899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:44.346 [2024-11-20 19:04:06.427906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:44.346 [2024-11-20 19:04:06.428079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:44.346 [2024-11-20 19:04:06.428257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:44.346 [2024-11-20 19:04:06.428266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:44.346 [2024-11-20 19:04:06.428272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:44.346 [2024-11-20 19:04:06.428279] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:44.346 [2024-11-20 19:04:06.440460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:44.346 [2024-11-20 19:04:06.440862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.346 [2024-11-20 19:04:06.440879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:44.346 [2024-11-20 19:04:06.440886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:44.346 [2024-11-20 19:04:06.441058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:44.346 [2024-11-20 19:04:06.441235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:44.346 [2024-11-20 19:04:06.441244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:44.346 [2024-11-20 19:04:06.441251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:44.346 [2024-11-20 19:04:06.441257] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:44.346 [2024-11-20 19:04:06.453443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:44.346 [2024-11-20 19:04:06.453850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.346 [2024-11-20 19:04:06.453867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:44.346 [2024-11-20 19:04:06.453874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:44.346 [2024-11-20 19:04:06.454046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:44.347 [2024-11-20 19:04:06.454223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:44.347 [2024-11-20 19:04:06.454235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:44.347 [2024-11-20 19:04:06.454242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:44.347 [2024-11-20 19:04:06.454249] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:44.347 [2024-11-20 19:04:06.466427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:44.347 [2024-11-20 19:04:06.466833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.347 [2024-11-20 19:04:06.466849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:44.347 [2024-11-20 19:04:06.466856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:44.347 [2024-11-20 19:04:06.467028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:44.347 [2024-11-20 19:04:06.467208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:44.347 [2024-11-20 19:04:06.467218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:44.347 [2024-11-20 19:04:06.467224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:44.347 [2024-11-20 19:04:06.467230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:44.347 [2024-11-20 19:04:06.479412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:44.347 [2024-11-20 19:04:06.479841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.347 [2024-11-20 19:04:06.479858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:44.347 [2024-11-20 19:04:06.479865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:44.347 [2024-11-20 19:04:06.480037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:44.347 [2024-11-20 19:04:06.480215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:44.347 [2024-11-20 19:04:06.480224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:44.347 [2024-11-20 19:04:06.480231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:44.347 [2024-11-20 19:04:06.480237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:44.347 [2024-11-20 19:04:06.492409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:44.347 [2024-11-20 19:04:06.492813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.347 [2024-11-20 19:04:06.492829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:44.347 [2024-11-20 19:04:06.492837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:44.347 [2024-11-20 19:04:06.493009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:44.347 [2024-11-20 19:04:06.493182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:44.347 [2024-11-20 19:04:06.493190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:44.347 [2024-11-20 19:04:06.493197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:44.347 [2024-11-20 19:04:06.493211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:44.347 [2024-11-20 19:04:06.505400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:44.347 [2024-11-20 19:04:06.505806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.347 [2024-11-20 19:04:06.505823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:44.347 [2024-11-20 19:04:06.505830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:44.347 [2024-11-20 19:04:06.506002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:44.347 [2024-11-20 19:04:06.506176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:44.347 [2024-11-20 19:04:06.506184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:44.347 [2024-11-20 19:04:06.506190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:44.347 [2024-11-20 19:04:06.506197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:44.347 [2024-11-20 19:04:06.518378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:44.347 [2024-11-20 19:04:06.518788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.347 [2024-11-20 19:04:06.518805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:44.347 [2024-11-20 19:04:06.518813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:44.347 [2024-11-20 19:04:06.518985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:44.347 [2024-11-20 19:04:06.519158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:44.347 [2024-11-20 19:04:06.519166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:44.347 [2024-11-20 19:04:06.519172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:44.347 [2024-11-20 19:04:06.519178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:44.347 [2024-11-20 19:04:06.531357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:44.347 [2024-11-20 19:04:06.531760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.347 [2024-11-20 19:04:06.531777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:44.347 [2024-11-20 19:04:06.531784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:44.347 [2024-11-20 19:04:06.531957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:44.347 [2024-11-20 19:04:06.532131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:44.347 [2024-11-20 19:04:06.532139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:44.347 [2024-11-20 19:04:06.532146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:44.347 [2024-11-20 19:04:06.532152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:44.347 [2024-11-20 19:04:06.544357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:44.347 [2024-11-20 19:04:06.544790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.347 [2024-11-20 19:04:06.544806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:44.347 [2024-11-20 19:04:06.544813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:44.347 [2024-11-20 19:04:06.544986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:44.347 [2024-11-20 19:04:06.545159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:44.347 [2024-11-20 19:04:06.545167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:44.347 [2024-11-20 19:04:06.545175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:44.347 [2024-11-20 19:04:06.545181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:44.347 [2024-11-20 19:04:06.557398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:44.347 [2024-11-20 19:04:06.557827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.347 [2024-11-20 19:04:06.557844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:44.347 [2024-11-20 19:04:06.557851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:44.347 [2024-11-20 19:04:06.558024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:44.347 [2024-11-20 19:04:06.558198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:44.347 [2024-11-20 19:04:06.558213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:44.347 [2024-11-20 19:04:06.558221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:44.347 [2024-11-20 19:04:06.558227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:44.347 [2024-11-20 19:04:06.570426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:44.347 [2024-11-20 19:04:06.570761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.347 [2024-11-20 19:04:06.570777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:44.347 [2024-11-20 19:04:06.570785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:44.347 [2024-11-20 19:04:06.570957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:44.347 [2024-11-20 19:04:06.571131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:44.347 [2024-11-20 19:04:06.571139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:44.347 [2024-11-20 19:04:06.571145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:44.347 [2024-11-20 19:04:06.571151] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:44.347 [2024-11-20 19:04:06.583516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:44.347 [2024-11-20 19:04:06.583921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.347 [2024-11-20 19:04:06.583938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:44.348 [2024-11-20 19:04:06.583945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:44.348 [2024-11-20 19:04:06.584123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:44.348 [2024-11-20 19:04:06.584301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:44.348 [2024-11-20 19:04:06.584310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:44.348 [2024-11-20 19:04:06.584317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:44.348 [2024-11-20 19:04:06.584323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:44.348 [2024-11-20 19:04:06.596537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:44.348 [2024-11-20 19:04:06.596875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.348 [2024-11-20 19:04:06.596891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:44.348 [2024-11-20 19:04:06.596899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:44.348 [2024-11-20 19:04:06.597072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:44.348 [2024-11-20 19:04:06.597250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:44.348 [2024-11-20 19:04:06.597259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:44.348 [2024-11-20 19:04:06.597266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:44.348 [2024-11-20 19:04:06.597272] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:44.348 [2024-11-20 19:04:06.609612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:44.348 [2024-11-20 19:04:06.610018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.348 [2024-11-20 19:04:06.610034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:44.348 [2024-11-20 19:04:06.610042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:44.348 [2024-11-20 19:04:06.610220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:44.348 [2024-11-20 19:04:06.610394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:44.348 [2024-11-20 19:04:06.610402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:44.348 [2024-11-20 19:04:06.610409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:44.348 [2024-11-20 19:04:06.610416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:44.348 [2024-11-20 19:04:06.622597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:44.348 [2024-11-20 19:04:06.623019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.348 [2024-11-20 19:04:06.623036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:44.348 [2024-11-20 19:04:06.623045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:44.348 [2024-11-20 19:04:06.623224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:44.348 [2024-11-20 19:04:06.623398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:44.348 [2024-11-20 19:04:06.623410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:44.348 [2024-11-20 19:04:06.623416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:44.348 [2024-11-20 19:04:06.623423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:44.348 [2024-11-20 19:04:06.635614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:44.348 [2024-11-20 19:04:06.636039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.348 [2024-11-20 19:04:06.636055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:44.348 [2024-11-20 19:04:06.636062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:44.348 [2024-11-20 19:04:06.636239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:44.348 [2024-11-20 19:04:06.636413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:44.348 [2024-11-20 19:04:06.636424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:44.348 [2024-11-20 19:04:06.636430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:44.348 [2024-11-20 19:04:06.636436] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:44.348 [2024-11-20 19:04:06.648655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:44.348 [2024-11-20 19:04:06.649020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.348 [2024-11-20 19:04:06.649036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:44.348 [2024-11-20 19:04:06.649044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:44.348 [2024-11-20 19:04:06.649222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:44.348 [2024-11-20 19:04:06.649397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:44.348 [2024-11-20 19:04:06.649406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:44.348 [2024-11-20 19:04:06.649413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:44.348 [2024-11-20 19:04:06.649421] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:44.348 [2024-11-20 19:04:06.661767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:44.348 [2024-11-20 19:04:06.662110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.348 [2024-11-20 19:04:06.662126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:44.348 [2024-11-20 19:04:06.662133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:44.348 [2024-11-20 19:04:06.662309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:44.348 [2024-11-20 19:04:06.662483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:44.348 [2024-11-20 19:04:06.662491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:44.348 [2024-11-20 19:04:06.662499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:44.348 [2024-11-20 19:04:06.662509] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:44.678 [2024-11-20 19:04:06.674883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:44.678 [2024-11-20 19:04:06.675309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.678 [2024-11-20 19:04:06.675326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:44.678 [2024-11-20 19:04:06.675334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:44.678 [2024-11-20 19:04:06.675507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:44.678 [2024-11-20 19:04:06.675679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:44.678 [2024-11-20 19:04:06.675687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:44.678 [2024-11-20 19:04:06.675694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:44.678 [2024-11-20 19:04:06.675700] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:44.678 [2024-11-20 19:04:06.687927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:44.678 [2024-11-20 19:04:06.688287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.678 [2024-11-20 19:04:06.688304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420
00:26:44.678 [2024-11-20 19:04:06.688312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set
00:26:44.678 [2024-11-20 19:04:06.688485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor
00:26:44.678 [2024-11-20 19:04:06.688658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:44.678 [2024-11-20 19:04:06.688667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:44.678 [2024-11-20 19:04:06.688673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:44.678 [2024-11-20 19:04:06.688679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:44.678 [2024-11-20 19:04:06.701034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.678 [2024-11-20 19:04:06.701390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.678 [2024-11-20 19:04:06.701407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:44.678 [2024-11-20 19:04:06.701414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:44.678 [2024-11-20 19:04:06.701586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:44.679 [2024-11-20 19:04:06.701760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.679 [2024-11-20 19:04:06.701768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.679 [2024-11-20 19:04:06.701774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.679 [2024-11-20 19:04:06.701781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.679 [2024-11-20 19:04:06.714130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.679 [2024-11-20 19:04:06.714543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.679 [2024-11-20 19:04:06.714564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:44.679 [2024-11-20 19:04:06.714571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:44.679 [2024-11-20 19:04:06.714744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:44.679 [2024-11-20 19:04:06.714917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.679 [2024-11-20 19:04:06.714926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.679 [2024-11-20 19:04:06.714932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.679 [2024-11-20 19:04:06.714938] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.679 [2024-11-20 19:04:06.727301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.679 [2024-11-20 19:04:06.727734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.679 [2024-11-20 19:04:06.727751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:44.679 [2024-11-20 19:04:06.727759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:44.679 [2024-11-20 19:04:06.727931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:44.679 [2024-11-20 19:04:06.728105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.679 [2024-11-20 19:04:06.728113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.679 [2024-11-20 19:04:06.728120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.679 [2024-11-20 19:04:06.728126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.679 [2024-11-20 19:04:06.740312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.679 [2024-11-20 19:04:06.740740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.679 [2024-11-20 19:04:06.740757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:44.679 [2024-11-20 19:04:06.740765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:44.679 [2024-11-20 19:04:06.740937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:44.679 [2024-11-20 19:04:06.741111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.679 [2024-11-20 19:04:06.741120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.679 [2024-11-20 19:04:06.741127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.679 [2024-11-20 19:04:06.741133] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.679 [2024-11-20 19:04:06.753327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.679 [2024-11-20 19:04:06.753736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.679 [2024-11-20 19:04:06.753752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:44.679 [2024-11-20 19:04:06.753761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:44.679 [2024-11-20 19:04:06.753936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:44.679 [2024-11-20 19:04:06.754112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.679 [2024-11-20 19:04:06.754120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.679 [2024-11-20 19:04:06.754128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.679 [2024-11-20 19:04:06.754135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.679 [2024-11-20 19:04:06.766342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.679 [2024-11-20 19:04:06.766749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.679 [2024-11-20 19:04:06.766766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:44.679 [2024-11-20 19:04:06.766774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:44.679 [2024-11-20 19:04:06.766948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:44.679 [2024-11-20 19:04:06.767122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.679 [2024-11-20 19:04:06.767131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.679 [2024-11-20 19:04:06.767138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.679 [2024-11-20 19:04:06.767145] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.679 [2024-11-20 19:04:06.779329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.679 [2024-11-20 19:04:06.779732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.679 [2024-11-20 19:04:06.779750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:44.679 [2024-11-20 19:04:06.779757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:44.679 [2024-11-20 19:04:06.779928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:44.679 [2024-11-20 19:04:06.780102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.679 [2024-11-20 19:04:06.780112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.679 [2024-11-20 19:04:06.780120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.679 [2024-11-20 19:04:06.780126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.679 [2024-11-20 19:04:06.792316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.679 [2024-11-20 19:04:06.792717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.679 [2024-11-20 19:04:06.792734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:44.679 [2024-11-20 19:04:06.792741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:44.679 [2024-11-20 19:04:06.792914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:44.679 [2024-11-20 19:04:06.793087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.679 [2024-11-20 19:04:06.793098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.679 [2024-11-20 19:04:06.793105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.679 [2024-11-20 19:04:06.793111] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.679 [2024-11-20 19:04:06.805317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.679 [2024-11-20 19:04:06.805728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.679 [2024-11-20 19:04:06.805745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:44.679 [2024-11-20 19:04:06.805752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:44.679 [2024-11-20 19:04:06.805925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:44.679 [2024-11-20 19:04:06.806099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.679 [2024-11-20 19:04:06.806107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.680 [2024-11-20 19:04:06.806113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.680 [2024-11-20 19:04:06.806119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.680 19:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:44.680 19:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:44.680 19:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:44.680 19:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:44.680 19:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:44.680 [2024-11-20 19:04:06.818308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.680 [2024-11-20 19:04:06.818733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.680 [2024-11-20 19:04:06.818750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:44.680 [2024-11-20 19:04:06.818758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:44.680 [2024-11-20 19:04:06.818931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:44.680 [2024-11-20 19:04:06.819105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.680 [2024-11-20 19:04:06.819115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.680 [2024-11-20 19:04:06.819125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.680 [2024-11-20 19:04:06.819134] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.680 [2024-11-20 19:04:06.831352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.680 [2024-11-20 19:04:06.831702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.680 [2024-11-20 19:04:06.831719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:44.680 [2024-11-20 19:04:06.831726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:44.680 [2024-11-20 19:04:06.831899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:44.680 [2024-11-20 19:04:06.832075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.680 [2024-11-20 19:04:06.832084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.680 [2024-11-20 19:04:06.832093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.680 [2024-11-20 19:04:06.832099] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.680 [2024-11-20 19:04:06.844366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.680 [2024-11-20 19:04:06.844775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.680 [2024-11-20 19:04:06.844791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:44.680 [2024-11-20 19:04:06.844798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:44.680 [2024-11-20 19:04:06.844970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:44.680 [2024-11-20 19:04:06.845144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.680 [2024-11-20 19:04:06.845153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.680 [2024-11-20 19:04:06.845159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.680 [2024-11-20 19:04:06.845165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.680 19:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:44.680 19:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:44.680 19:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.680 19:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:44.680 [2024-11-20 19:04:06.857366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.680 [2024-11-20 19:04:06.857787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.680 [2024-11-20 19:04:06.857804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:44.680 [2024-11-20 19:04:06.857811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:44.680 [2024-11-20 19:04:06.857983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:44.680 [2024-11-20 19:04:06.857990] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:44.680 [2024-11-20 19:04:06.858157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.680 [2024-11-20 19:04:06.858166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.680 [2024-11-20 19:04:06.858172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.680 [2024-11-20 19:04:06.858178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.680 19:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.680 19:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:44.680 19:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.680 19:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:44.680 [2024-11-20 19:04:06.870374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.680 [2024-11-20 19:04:06.870781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.680 [2024-11-20 19:04:06.870799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:44.680 [2024-11-20 19:04:06.870807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:44.680 [2024-11-20 19:04:06.870978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:44.680 [2024-11-20 19:04:06.871152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.680 [2024-11-20 19:04:06.871162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.680 [2024-11-20 19:04:06.871168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.680 [2024-11-20 19:04:06.871175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.680 [2024-11-20 19:04:06.883374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.680 [2024-11-20 19:04:06.883806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.680 [2024-11-20 19:04:06.883823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:44.680 [2024-11-20 19:04:06.883831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:44.680 [2024-11-20 19:04:06.884005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:44.680 [2024-11-20 19:04:06.884179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.680 [2024-11-20 19:04:06.884189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.680 [2024-11-20 19:04:06.884195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.680 [2024-11-20 19:04:06.884206] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.680 [2024-11-20 19:04:06.896400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.680 [2024-11-20 19:04:06.896838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.680 [2024-11-20 19:04:06.896856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:44.680 [2024-11-20 19:04:06.896864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:44.680 [2024-11-20 19:04:06.897036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:44.680 [2024-11-20 19:04:06.897217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.680 [2024-11-20 19:04:06.897227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.680 [2024-11-20 19:04:06.897234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.680 [2024-11-20 19:04:06.897241] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.680 Malloc0 00:26:44.680 19:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.681 19:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:44.681 19:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.681 19:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:44.681 [2024-11-20 19:04:06.909428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.681 [2024-11-20 19:04:06.909879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.681 [2024-11-20 19:04:06.909897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:44.681 [2024-11-20 19:04:06.909904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:44.681 [2024-11-20 19:04:06.910077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:44.681 [2024-11-20 19:04:06.910258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.681 [2024-11-20 19:04:06.910269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.681 [2024-11-20 19:04:06.910276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.681 [2024-11-20 19:04:06.910282] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.681 19:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.681 19:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:44.681 19:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.681 19:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:44.681 19:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.681 [2024-11-20 19:04:06.922472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.681 19:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:44.681 19:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.681 [2024-11-20 19:04:06.922889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.681 [2024-11-20 19:04:06.922907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbf500 with addr=10.0.0.2, port=4420 00:26:44.681 [2024-11-20 19:04:06.922916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf500 is same with the state(6) to be set 00:26:44.681 19:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:44.681 [2024-11-20 19:04:06.923088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbf500 (9): Bad file descriptor 00:26:44.681 [2024-11-20 19:04:06.923267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.681 [2024-11-20 19:04:06.923278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] 
controller reinitialization failed 00:26:44.681 [2024-11-20 19:04:06.923284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.681 [2024-11-20 19:04:06.923291] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.681 [2024-11-20 19:04:06.925834] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:44.681 19:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.681 19:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3797654 00:26:44.681 [2024-11-20 19:04:06.935476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.681 [2024-11-20 19:04:06.962681] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:26:46.368 4888.00 IOPS, 19.09 MiB/s [2024-11-20T18:04:09.629Z] 5708.50 IOPS, 22.30 MiB/s [2024-11-20T18:04:10.567Z] 6366.22 IOPS, 24.87 MiB/s [2024-11-20T18:04:11.503Z] 6867.00 IOPS, 26.82 MiB/s [2024-11-20T18:04:12.441Z] 7287.09 IOPS, 28.47 MiB/s [2024-11-20T18:04:13.378Z] 7646.67 IOPS, 29.87 MiB/s [2024-11-20T18:04:14.756Z] 7926.23 IOPS, 30.96 MiB/s [2024-11-20T18:04:15.693Z] 8176.29 IOPS, 31.94 MiB/s 00:26:53.368 Latency(us) 00:26:53.368 [2024-11-20T18:04:15.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:53.368 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:53.368 Verification LBA range: start 0x0 length 0x4000 00:26:53.368 Nvme1n1 : 15.00 8397.60 32.80 13046.73 0.00 5949.67 659.26 13544.11 00:26:53.368 [2024-11-20T18:04:15.693Z] =================================================================================================================== 00:26:53.368 [2024-11-20T18:04:15.693Z] Total : 8397.60 32.80 13046.73 0.00 5949.67 
659.26 13544.11 00:26:53.368 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:26:53.368 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:53.368 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.368 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:53.368 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.368 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:53.368 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:53.368 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:53.368 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:26:53.368 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:53.368 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:26:53.368 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:53.368 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:53.368 rmmod nvme_tcp 00:26:53.368 rmmod nvme_fabrics 00:26:53.368 rmmod nvme_keyring 00:26:53.368 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:53.368 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:26:53.368 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:26:53.368 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3798722 ']' 00:26:53.368 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3798722 00:26:53.368 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@954 -- # '[' -z 3798722 ']' 00:26:53.368 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3798722 00:26:53.368 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:26:53.368 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:53.368 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3798722 00:26:53.368 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:53.368 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:53.368 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3798722' 00:26:53.368 killing process with pid 3798722 00:26:53.368 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3798722 00:26:53.368 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3798722 00:26:53.628 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:53.628 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:53.628 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:53.628 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:26:53.628 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:26:53.628 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:53.628 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:26:53.628 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:53.628 19:04:15 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:53.628 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.628 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:53.628 19:04:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.164 19:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:56.164 00:26:56.164 real 0m26.033s 00:26:56.164 user 1m0.607s 00:26:56.164 sys 0m6.746s 00:26:56.164 19:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:56.164 19:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:56.164 ************************************ 00:26:56.164 END TEST nvmf_bdevperf 00:26:56.164 ************************************ 00:26:56.164 19:04:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:56.164 19:04:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:56.164 19:04:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:56.164 19:04:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.164 ************************************ 00:26:56.164 START TEST nvmf_target_disconnect 00:26:56.164 ************************************ 00:26:56.164 19:04:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:56.164 * Looking for test storage... 
00:26:56.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:26:56.164 19:04:18 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:56.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.164 
--rc genhtml_branch_coverage=1 00:26:56.164 --rc genhtml_function_coverage=1 00:26:56.164 --rc genhtml_legend=1 00:26:56.164 --rc geninfo_all_blocks=1 00:26:56.164 --rc geninfo_unexecuted_blocks=1 00:26:56.164 00:26:56.164 ' 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:56.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.164 --rc genhtml_branch_coverage=1 00:26:56.164 --rc genhtml_function_coverage=1 00:26:56.164 --rc genhtml_legend=1 00:26:56.164 --rc geninfo_all_blocks=1 00:26:56.164 --rc geninfo_unexecuted_blocks=1 00:26:56.164 00:26:56.164 ' 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:56.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.164 --rc genhtml_branch_coverage=1 00:26:56.164 --rc genhtml_function_coverage=1 00:26:56.164 --rc genhtml_legend=1 00:26:56.164 --rc geninfo_all_blocks=1 00:26:56.164 --rc geninfo_unexecuted_blocks=1 00:26:56.164 00:26:56.164 ' 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:56.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.164 --rc genhtml_branch_coverage=1 00:26:56.164 --rc genhtml_function_coverage=1 00:26:56.164 --rc genhtml_legend=1 00:26:56.164 --rc geninfo_all_blocks=1 00:26:56.164 --rc geninfo_unexecuted_blocks=1 00:26:56.164 00:26:56.164 ' 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s 
extglob 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:56.164 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.165 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.165 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.165 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:56.165 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.165 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:26:56.165 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:56.165 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:56.165 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:56.165 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:56.165 19:04:18 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:56.165 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:56.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:56.165 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:56.165 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:56.165 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:56.165 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:56.165 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:56.165 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:56.165 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:56.165 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:56.165 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:56.165 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:56.165 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:56.165 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:56.165 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.165 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:26:56.165 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.165 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:56.165 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:56.165 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:26:56.165 19:04:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:27:02.732 
19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:02.732 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:02.732 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:02.732 Found net devices under 0000:86:00.0: cvl_0_0 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:02.732 Found net devices under 0000:86:00.1: cvl_0_1 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:02.732 19:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:02.732 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:02.732 19:04:24 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:02.732 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:02.732 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:02.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:02.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.522 ms 00:27:02.732 00:27:02.732 --- 10.0.0.2 ping statistics --- 00:27:02.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.732 rtt min/avg/max/mdev = 0.522/0.522/0.522/0.000 ms 00:27:02.732 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:02.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:02.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:27:02.732 00:27:02.732 --- 10.0.0.1 ping statistics --- 00:27:02.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.732 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:27:02.732 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:02.732 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:27:02.732 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:02.732 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:02.732 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:02.732 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:02.732 19:04:24 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:02.732 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:02.732 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:02.732 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:02.732 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:02.732 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:02.732 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:02.732 ************************************ 00:27:02.732 START TEST nvmf_target_disconnect_tc1 00:27:02.732 ************************************ 00:27:02.732 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:27:02.732 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:02.732 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:27:02.733 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:02.733 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:02.733 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:02.733 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:02.733 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:02.733 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:02.733 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:02.733 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:02.733 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:02.733 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:02.733 [2024-11-20 19:04:24.224587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.733 [2024-11-20 19:04:24.224634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x765ab0 with 
addr=10.0.0.2, port=4420 00:27:02.733 [2024-11-20 19:04:24.224651] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:02.733 [2024-11-20 19:04:24.224665] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:02.733 [2024-11-20 19:04:24.224671] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:27:02.733 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:02.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:02.733 Initializing NVMe Controllers 00:27:02.733 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:27:02.733 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:02.733 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:02.733 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:02.733 00:27:02.733 real 0m0.106s 00:27:02.733 user 0m0.043s 00:27:02.733 sys 0m0.062s 00:27:02.733 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:02.733 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:02.733 ************************************ 00:27:02.733 END TEST nvmf_target_disconnect_tc1 00:27:02.733 ************************************ 00:27:02.733 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:02.733 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:02.733 19:04:24 
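The tc1 test above passes (`es=1`, then `(( !es == 0 ))`) precisely because the `reconnect` example *failed*: it is wrapped in the `NOT` helper, which succeeds only when its command exits non-zero. A self-contained re-creation of that pattern, matching the `local es=0` / `(( !es == 0 ))` lines visible in the xtrace:

```shell
# Run a command that is *expected* to fail; exit 0 only when it does fail.
NOT() {
    local es=0
    "$@" || es=$?
    if (( es > 128 )); then
        return "$es"      # killed by a signal: a real failure, not an expected one
    fi
    (( !es == 0 ))        # invert: non-zero exit status becomes success
}
```

So `NOT reconnect ...` is how tc1 asserts that connecting to a target that is not yet listening produces `connect() failed, errno = 111` rather than a working controller.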
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:02.733 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:02.733 ************************************ 00:27:02.733 START TEST nvmf_target_disconnect_tc2 00:27:02.733 ************************************ 00:27:02.733 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:27:02.733 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:02.733 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:02.733 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:02.733 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:02.733 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:02.733 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3803758 00:27:02.733 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3803758 00:27:02.733 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:02.733 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3803758 ']' 00:27:02.733 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:02.733 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:02.733 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:02.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:02.733 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:02.733 19:04:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:02.733 [2024-11-20 19:04:24.370879] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:27:02.733 [2024-11-20 19:04:24.370924] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:02.733 [2024-11-20 19:04:24.454103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:02.733 [2024-11-20 19:04:24.495398] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:02.733 [2024-11-20 19:04:24.495437] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:02.733 [2024-11-20 19:04:24.495444] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:02.733 [2024-11-20 19:04:24.495451] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:02.733 [2024-11-20 19:04:24.495456] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
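The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from a poll loop: the helper repeatedly checks that the target pid is still alive and that its RPC socket has appeared, up to `max_retries=100`. A sketch of that idea (the signature and sleep interval here are illustrative, not the exact helper):

```shell
# Poll until $pid is alive *and* its RPC unix socket exists, bounded retries.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died
        [[ -S $rpc_addr ]] && return 0           # RPC socket is up
        sleep 0.1
    done
    return 1                                     # gave up waiting
}
```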
00:27:02.733 [2024-11-20 19:04:24.496991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:02.733 [2024-11-20 19:04:24.497107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:02.733 [2024-11-20 19:04:24.497194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:02.733 [2024-11-20 19:04:24.497194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:02.989 19:04:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:02.989 19:04:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:02.989 19:04:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:02.989 19:04:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:02.989 19:04:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:02.989 19:04:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:02.989 19:04:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:02.989 19:04:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.989 19:04:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:02.989 Malloc0 00:27:02.989 19:04:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.989 19:04:25 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:02.989 19:04:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.989 19:04:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:02.989 [2024-11-20 19:04:25.284624] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:02.989 19:04:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.989 19:04:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:02.989 19:04:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.989 19:04:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:02.989 19:04:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.989 19:04:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:02.989 19:04:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.989 19:04:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:02.989 19:04:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.989 19:04:25 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:02.990 19:04:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.990 19:04:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:02.990 [2024-11-20 19:04:25.313577] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:03.247 19:04:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.247 19:04:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:03.247 19:04:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.247 19:04:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:03.247 19:04:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.247 19:04:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3803932 00:27:03.247 19:04:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:03.247 19:04:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:05.160 19:04:27 
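The `rpc_cmd` calls above (`bdev_malloc_create` through `nvmf_subsystem_add_listener`) build the whole target that `reconnect` then attacks. Restated as direct `scripts/rpc.py` invocations against the target's socket, a sketch: the `rpc.py` path, the `-s /var/tmp/spdk.sock` address, and the namespace wrapper are taken from the log, and the function is defined but not run here since it needs a live `nvmf_tgt`:

```shell
# The tc2 target bring-up, as explicit RPC calls (sketch; needs a running target).
rpc="ip netns exec cvl_0_0_ns_spdk scripts/rpc.py -s /var/tmp/spdk.sock"

build_target() {
    $rpc bdev_malloc_create 64 512 -b Malloc0                       # backing bdev
    $rpc nvmf_create_transport -t tcp -o                            # TCP transport
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose bdev as a namespace
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
}
```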
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3803758 00:27:05.160 19:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:05.160 Read completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Read completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Write completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Read completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Write completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Read completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Read completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Read completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Write completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Write completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Read completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Read completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Read completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Write completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Read completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Write completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Write completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Read completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Read completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Write completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 
Read completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Write completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Write completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Read completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Read completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Write completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Read completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Write completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Write completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Read completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Read completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Write completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Read completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Read completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Read completed with error (sct=0, sc=8) 00:27:05.160 [2024-11-20 19:04:27.341698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.160 starting I/O failed 00:27:05.160 Read completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Read completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Read completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Read completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Read completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.160 Read completed with error (sct=0, sc=8) 00:27:05.160 starting I/O 
failed 00:27:05.160 Read completed with error (sct=0, sc=8) 00:27:05.160 starting I/O failed 00:27:05.161 Read completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Read completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Read completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Read completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Write completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Write completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Read completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Write completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Read completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Write completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Read completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Read completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Read completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Write completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Write completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Write completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Read completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Read completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Read completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Read completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Write completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Write completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 
00:27:05.161 [2024-11-20 19:04:27.341918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.161 Read completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Read completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Read completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Read completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Write completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Write completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Read completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Write completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Read completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Write completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Read completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Write completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Read completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Read completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Read completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Read completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Read completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Write completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Read completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Read completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Write completed with error (sct=0, sc=8) 00:27:05.161 
starting I/O failed 00:27:05.161 Write completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Read completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Write completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Read completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Write completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Write completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Write completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Write completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Read completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Write completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 Read completed with error (sct=0, sc=8) 00:27:05.161 starting I/O failed 00:27:05.161 [2024-11-20 19:04:27.342117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:05.161 [2024-11-20 19:04:27.342247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.161 [2024-11-20 19:04:27.342272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.161 qpair failed and we were unable to recover it. 00:27:05.161 [2024-11-20 19:04:27.342397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.161 [2024-11-20 19:04:27.342409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.161 qpair failed and we were unable to recover it. 
00:27:05.161 [2024-11-20 19:04:27.342522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.161 [2024-11-20 19:04:27.342554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.161 qpair failed and we were unable to recover it. 00:27:05.161 [2024-11-20 19:04:27.342706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.161 [2024-11-20 19:04:27.342739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.161 qpair failed and we were unable to recover it. 00:27:05.161 [2024-11-20 19:04:27.342880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.161 [2024-11-20 19:04:27.342914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.161 qpair failed and we were unable to recover it. 00:27:05.161 [2024-11-20 19:04:27.343051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.161 [2024-11-20 19:04:27.343084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.161 qpair failed and we were unable to recover it. 00:27:05.161 [2024-11-20 19:04:27.343280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.161 [2024-11-20 19:04:27.343314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.161 qpair failed and we were unable to recover it. 
00:27:05.161 [2024-11-20 19:04:27.343504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.161 [2024-11-20 19:04:27.343536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.161 qpair failed and we were unable to recover it. 00:27:05.161 [2024-11-20 19:04:27.343649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.161 [2024-11-20 19:04:27.343682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.161 qpair failed and we were unable to recover it. 00:27:05.161 [2024-11-20 19:04:27.343899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.161 [2024-11-20 19:04:27.343933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.161 qpair failed and we were unable to recover it. 00:27:05.161 [2024-11-20 19:04:27.344063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.161 [2024-11-20 19:04:27.344097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.161 qpair failed and we were unable to recover it. 00:27:05.161 [2024-11-20 19:04:27.344218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.161 [2024-11-20 19:04:27.344253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.161 qpair failed and we were unable to recover it. 
00:27:05.161 [2024-11-20 19:04:27.344378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.161 [2024-11-20 19:04:27.344389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.161 qpair failed and we were unable to recover it. 00:27:05.161 [2024-11-20 19:04:27.344464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.161 [2024-11-20 19:04:27.344476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.161 qpair failed and we were unable to recover it. 00:27:05.161 [2024-11-20 19:04:27.344558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.161 [2024-11-20 19:04:27.344570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.161 qpair failed and we were unable to recover it. 00:27:05.161 [2024-11-20 19:04:27.344668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.161 [2024-11-20 19:04:27.344700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.161 qpair failed and we were unable to recover it. 00:27:05.161 [2024-11-20 19:04:27.344828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.161 [2024-11-20 19:04:27.344861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.161 qpair failed and we were unable to recover it. 
00:27:05.161 [2024-11-20 19:04:27.344996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.161 [2024-11-20 19:04:27.345028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.161 qpair failed and we were unable to recover it. 00:27:05.161 [2024-11-20 19:04:27.345185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.161 [2024-11-20 19:04:27.345254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.161 qpair failed and we were unable to recover it. 00:27:05.162 [2024-11-20 19:04:27.345465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.162 [2024-11-20 19:04:27.345500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.162 qpair failed and we were unable to recover it. 00:27:05.162 [2024-11-20 19:04:27.345750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.162 [2024-11-20 19:04:27.345783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.162 qpair failed and we were unable to recover it. 00:27:05.162 [2024-11-20 19:04:27.345911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.162 [2024-11-20 19:04:27.345943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.162 qpair failed and we were unable to recover it. 
00:27:05.162 [2024-11-20 19:04:27.346131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.162 [2024-11-20 19:04:27.346163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.162 qpair failed and we were unable to recover it.
00:27:05.162 [2024-11-20 19:04:27.346302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.162 [2024-11-20 19:04:27.346337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.162 qpair failed and we were unable to recover it.
00:27:05.162 [2024-11-20 19:04:27.346454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.162 [2024-11-20 19:04:27.346488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.162 qpair failed and we were unable to recover it.
00:27:05.162 [2024-11-20 19:04:27.346593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.162 [2024-11-20 19:04:27.346625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.162 qpair failed and we were unable to recover it.
00:27:05.162 [2024-11-20 19:04:27.346823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.162 [2024-11-20 19:04:27.346855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.162 qpair failed and we were unable to recover it.
00:27:05.162 [2024-11-20 19:04:27.346972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.162 [2024-11-20 19:04:27.347012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.162 qpair failed and we were unable to recover it.
00:27:05.162 [2024-11-20 19:04:27.347156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.162 [2024-11-20 19:04:27.347189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.162 qpair failed and we were unable to recover it.
00:27:05.162 [2024-11-20 19:04:27.347326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.162 [2024-11-20 19:04:27.347359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.162 qpair failed and we were unable to recover it.
00:27:05.162 [2024-11-20 19:04:27.347534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.162 [2024-11-20 19:04:27.347579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.162 qpair failed and we were unable to recover it.
00:27:05.162 [2024-11-20 19:04:27.347670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.162 [2024-11-20 19:04:27.347685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.162 qpair failed and we were unable to recover it.
00:27:05.162 [2024-11-20 19:04:27.347767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.162 [2024-11-20 19:04:27.347779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.162 qpair failed and we were unable to recover it.
00:27:05.162 [2024-11-20 19:04:27.347990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.162 [2024-11-20 19:04:27.348023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.162 qpair failed and we were unable to recover it.
00:27:05.162 [2024-11-20 19:04:27.348162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.162 [2024-11-20 19:04:27.348195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.162 qpair failed and we were unable to recover it.
00:27:05.162 [2024-11-20 19:04:27.348320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.162 [2024-11-20 19:04:27.348354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.162 qpair failed and we were unable to recover it.
00:27:05.162 [2024-11-20 19:04:27.348480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.162 [2024-11-20 19:04:27.348491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.162 qpair failed and we were unable to recover it.
00:27:05.162 [2024-11-20 19:04:27.348574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.162 [2024-11-20 19:04:27.348586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.162 qpair failed and we were unable to recover it.
00:27:05.162 [2024-11-20 19:04:27.348733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.162 [2024-11-20 19:04:27.348766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.162 qpair failed and we were unable to recover it.
00:27:05.162 [2024-11-20 19:04:27.348884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.162 [2024-11-20 19:04:27.348916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.162 qpair failed and we were unable to recover it.
00:27:05.162 [2024-11-20 19:04:27.349158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.162 [2024-11-20 19:04:27.349191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.162 qpair failed and we were unable to recover it.
00:27:05.162 [2024-11-20 19:04:27.349459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.162 [2024-11-20 19:04:27.349471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.162 qpair failed and we were unable to recover it.
00:27:05.162 [2024-11-20 19:04:27.349551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.162 [2024-11-20 19:04:27.349584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.162 qpair failed and we were unable to recover it.
00:27:05.162 [2024-11-20 19:04:27.349724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.162 [2024-11-20 19:04:27.349757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.162 qpair failed and we were unable to recover it.
00:27:05.162 [2024-11-20 19:04:27.349942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.162 [2024-11-20 19:04:27.349974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.162 qpair failed and we were unable to recover it.
00:27:05.162 [2024-11-20 19:04:27.350243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.162 [2024-11-20 19:04:27.350279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.162 qpair failed and we were unable to recover it.
00:27:05.162 [2024-11-20 19:04:27.350399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.162 [2024-11-20 19:04:27.350431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.162 qpair failed and we were unable to recover it.
00:27:05.162 [2024-11-20 19:04:27.350548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.162 [2024-11-20 19:04:27.350571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.162 qpair failed and we were unable to recover it.
00:27:05.162 [2024-11-20 19:04:27.350658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.162 [2024-11-20 19:04:27.350681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.162 qpair failed and we were unable to recover it.
00:27:05.162 [2024-11-20 19:04:27.350779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.162 [2024-11-20 19:04:27.350802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.162 qpair failed and we were unable to recover it.
00:27:05.162 [2024-11-20 19:04:27.351003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.162 [2024-11-20 19:04:27.351026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.162 qpair failed and we were unable to recover it.
00:27:05.162 [2024-11-20 19:04:27.351139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.162 [2024-11-20 19:04:27.351162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.162 qpair failed and we were unable to recover it.
00:27:05.162 [2024-11-20 19:04:27.351342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.162 [2024-11-20 19:04:27.351366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.162 qpair failed and we were unable to recover it.
00:27:05.162 [2024-11-20 19:04:27.351473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.162 [2024-11-20 19:04:27.351496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.162 qpair failed and we were unable to recover it.
00:27:05.162 [2024-11-20 19:04:27.351666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.162 [2024-11-20 19:04:27.351689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.162 qpair failed and we were unable to recover it.
00:27:05.162 [2024-11-20 19:04:27.351785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.162 [2024-11-20 19:04:27.351809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.162 qpair failed and we were unable to recover it.
00:27:05.162 Read completed with error (sct=0, sc=8)
00:27:05.163 starting I/O failed
00:27:05.163 Read completed with error (sct=0, sc=8)
00:27:05.163 starting I/O failed
00:27:05.163 Read completed with error (sct=0, sc=8)
00:27:05.163 starting I/O failed
00:27:05.163 Write completed with error (sct=0, sc=8)
00:27:05.163 starting I/O failed
00:27:05.163 Read completed with error (sct=0, sc=8)
00:27:05.163 starting I/O failed
00:27:05.163 Read completed with error (sct=0, sc=8)
00:27:05.163 starting I/O failed
00:27:05.163 Read completed with error (sct=0, sc=8)
00:27:05.163 starting I/O failed
00:27:05.163 Read completed with error (sct=0, sc=8)
00:27:05.163 starting I/O failed
00:27:05.163 Write completed with error (sct=0, sc=8)
00:27:05.163 starting I/O failed
00:27:05.163 Read completed with error (sct=0, sc=8)
00:27:05.163 starting I/O failed
00:27:05.163 Write completed with error (sct=0, sc=8)
00:27:05.163 starting I/O failed
00:27:05.163 Write completed with error (sct=0, sc=8)
00:27:05.163 starting I/O failed
00:27:05.163 Read completed with error (sct=0, sc=8)
00:27:05.163 starting I/O failed
00:27:05.163 Read completed with error (sct=0, sc=8)
00:27:05.163 starting I/O failed
00:27:05.163 Write completed with error (sct=0, sc=8)
00:27:05.163 starting I/O failed
00:27:05.163 Read completed with error (sct=0, sc=8)
00:27:05.163 starting I/O failed
00:27:05.163 Write completed with error (sct=0, sc=8)
00:27:05.163 starting I/O failed
00:27:05.163 Write completed with error (sct=0, sc=8)
00:27:05.163 starting I/O failed
00:27:05.163 Write completed with error (sct=0, sc=8)
00:27:05.163 starting I/O failed
00:27:05.163 Read completed with error (sct=0, sc=8)
00:27:05.163 starting I/O failed
00:27:05.163 Write completed with error (sct=0, sc=8)
00:27:05.163 starting I/O failed
00:27:05.163 Read completed with error (sct=0, sc=8)
00:27:05.163 starting I/O failed
00:27:05.163 Write completed with error (sct=0, sc=8)
00:27:05.163 starting I/O failed
00:27:05.163 Read completed with error (sct=0, sc=8)
00:27:05.163 starting I/O failed
00:27:05.163 Write completed with error (sct=0, sc=8)
00:27:05.163 starting I/O failed
00:27:05.163 Write completed with error (sct=0, sc=8)
00:27:05.163 starting I/O failed
00:27:05.163 Write completed with error (sct=0, sc=8)
00:27:05.163 starting I/O failed
00:27:05.163 Read completed with error (sct=0, sc=8)
00:27:05.163 starting I/O failed
00:27:05.163 Write completed with error (sct=0, sc=8)
00:27:05.163 starting I/O failed
00:27:05.163 Write completed with error (sct=0, sc=8)
00:27:05.163 starting I/O failed
00:27:05.163 Read completed with error (sct=0, sc=8)
00:27:05.163 starting I/O failed
00:27:05.163 Write completed with error (sct=0, sc=8)
00:27:05.163 starting I/O failed
00:27:05.163 [2024-11-20 19:04:27.352458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:05.163 [2024-11-20 19:04:27.352633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.163 [2024-11-20 19:04:27.352671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:05.163 qpair failed and we were unable to recover it.
00:27:05.163 [2024-11-20 19:04:27.352841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.163 [2024-11-20 19:04:27.352865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.163 qpair failed and we were unable to recover it.
00:27:05.163 [2024-11-20 19:04:27.352967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.163 [2024-11-20 19:04:27.352990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.163 qpair failed and we were unable to recover it.
00:27:05.163 [2024-11-20 19:04:27.353148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.163 [2024-11-20 19:04:27.353170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.163 qpair failed and we were unable to recover it.
00:27:05.163 [2024-11-20 19:04:27.353342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.163 [2024-11-20 19:04:27.353366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.163 qpair failed and we were unable to recover it.
00:27:05.163 [2024-11-20 19:04:27.353478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.163 [2024-11-20 19:04:27.353500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.163 qpair failed and we were unable to recover it.
00:27:05.163 [2024-11-20 19:04:27.353654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.163 [2024-11-20 19:04:27.353677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.163 qpair failed and we were unable to recover it.
00:27:05.163 [2024-11-20 19:04:27.353783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.163 [2024-11-20 19:04:27.353805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.163 qpair failed and we were unable to recover it.
00:27:05.163 [2024-11-20 19:04:27.353922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.163 [2024-11-20 19:04:27.353945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.163 qpair failed and we were unable to recover it.
00:27:05.163 [2024-11-20 19:04:27.354056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.163 [2024-11-20 19:04:27.354079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.163 qpair failed and we were unable to recover it.
00:27:05.163 [2024-11-20 19:04:27.354189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.163 [2024-11-20 19:04:27.354217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.163 qpair failed and we were unable to recover it.
00:27:05.163 [2024-11-20 19:04:27.354303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.163 [2024-11-20 19:04:27.354326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.163 qpair failed and we were unable to recover it.
00:27:05.163 [2024-11-20 19:04:27.354445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.163 [2024-11-20 19:04:27.354468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.163 qpair failed and we were unable to recover it.
00:27:05.163 [2024-11-20 19:04:27.354555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.163 [2024-11-20 19:04:27.354578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.163 qpair failed and we were unable to recover it.
00:27:05.163 [2024-11-20 19:04:27.354677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.163 [2024-11-20 19:04:27.354700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.163 qpair failed and we were unable to recover it.
00:27:05.163 [2024-11-20 19:04:27.354794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.163 [2024-11-20 19:04:27.354816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.163 qpair failed and we were unable to recover it.
00:27:05.163 [2024-11-20 19:04:27.355044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.163 [2024-11-20 19:04:27.355076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.163 qpair failed and we were unable to recover it.
00:27:05.163 [2024-11-20 19:04:27.355212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.163 [2024-11-20 19:04:27.355246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.163 qpair failed and we were unable to recover it.
00:27:05.163 [2024-11-20 19:04:27.355369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.163 [2024-11-20 19:04:27.355402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.163 qpair failed and we were unable to recover it.
00:27:05.163 [2024-11-20 19:04:27.355513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.163 [2024-11-20 19:04:27.355556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.163 qpair failed and we were unable to recover it.
00:27:05.163 [2024-11-20 19:04:27.355718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.163 [2024-11-20 19:04:27.355741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.163 qpair failed and we were unable to recover it.
00:27:05.163 [2024-11-20 19:04:27.355843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.163 [2024-11-20 19:04:27.355865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.163 qpair failed and we were unable to recover it.
00:27:05.163 [2024-11-20 19:04:27.355965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.163 [2024-11-20 19:04:27.355988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.163 qpair failed and we were unable to recover it.
00:27:05.163 [2024-11-20 19:04:27.356149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.163 [2024-11-20 19:04:27.356182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.163 qpair failed and we were unable to recover it.
00:27:05.163 [2024-11-20 19:04:27.356312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.163 [2024-11-20 19:04:27.356345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.163 qpair failed and we were unable to recover it.
00:27:05.163 [2024-11-20 19:04:27.356451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.163 [2024-11-20 19:04:27.356484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.163 qpair failed and we were unable to recover it.
00:27:05.163 [2024-11-20 19:04:27.356680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.164 [2024-11-20 19:04:27.356712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.164 qpair failed and we were unable to recover it.
00:27:05.164 [2024-11-20 19:04:27.356824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.164 [2024-11-20 19:04:27.356857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.164 qpair failed and we were unable to recover it.
00:27:05.164 [2024-11-20 19:04:27.356981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.164 [2024-11-20 19:04:27.357013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.164 qpair failed and we were unable to recover it.
00:27:05.164 [2024-11-20 19:04:27.357221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.164 [2024-11-20 19:04:27.357256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.164 qpair failed and we were unable to recover it.
00:27:05.164 [2024-11-20 19:04:27.357371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.164 [2024-11-20 19:04:27.357415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.164 qpair failed and we were unable to recover it.
00:27:05.164 [2024-11-20 19:04:27.357541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.164 [2024-11-20 19:04:27.357564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.164 qpair failed and we were unable to recover it.
00:27:05.164 [2024-11-20 19:04:27.357727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.164 [2024-11-20 19:04:27.357750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.164 qpair failed and we were unable to recover it.
00:27:05.164 [2024-11-20 19:04:27.357842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.164 [2024-11-20 19:04:27.357864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.164 qpair failed and we were unable to recover it.
00:27:05.164 [2024-11-20 19:04:27.357967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.164 [2024-11-20 19:04:27.357999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.164 qpair failed and we were unable to recover it.
00:27:05.164 [2024-11-20 19:04:27.358176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.164 [2024-11-20 19:04:27.358250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.164 qpair failed and we were unable to recover it.
00:27:05.164 [2024-11-20 19:04:27.358395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.164 [2024-11-20 19:04:27.358428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.164 qpair failed and we were unable to recover it.
00:27:05.164 [2024-11-20 19:04:27.358543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.164 [2024-11-20 19:04:27.358565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.164 qpair failed and we were unable to recover it.
00:27:05.164 [2024-11-20 19:04:27.358655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.164 [2024-11-20 19:04:27.358677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.164 qpair failed and we were unable to recover it.
00:27:05.164 [2024-11-20 19:04:27.358847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.164 [2024-11-20 19:04:27.358869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.164 qpair failed and we were unable to recover it.
00:27:05.164 [2024-11-20 19:04:27.358991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.164 [2024-11-20 19:04:27.359024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.164 qpair failed and we were unable to recover it.
00:27:05.164 [2024-11-20 19:04:27.359155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.164 [2024-11-20 19:04:27.359188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.164 qpair failed and we were unable to recover it.
00:27:05.164 [2024-11-20 19:04:27.359323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.164 [2024-11-20 19:04:27.359357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.164 qpair failed and we were unable to recover it.
00:27:05.164 [2024-11-20 19:04:27.359530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.164 [2024-11-20 19:04:27.359562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.164 qpair failed and we were unable to recover it.
00:27:05.164 [2024-11-20 19:04:27.359749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.164 [2024-11-20 19:04:27.359782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.164 qpair failed and we were unable to recover it.
00:27:05.164 [2024-11-20 19:04:27.359970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.164 [2024-11-20 19:04:27.360002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.164 qpair failed and we were unable to recover it.
00:27:05.164 [2024-11-20 19:04:27.360119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.164 [2024-11-20 19:04:27.360152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.164 qpair failed and we were unable to recover it.
00:27:05.164 [2024-11-20 19:04:27.360260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.164 [2024-11-20 19:04:27.360284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.164 qpair failed and we were unable to recover it.
00:27:05.164 [2024-11-20 19:04:27.360463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.164 [2024-11-20 19:04:27.360485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.164 qpair failed and we were unable to recover it.
00:27:05.164 [2024-11-20 19:04:27.360582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.164 [2024-11-20 19:04:27.360615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.164 qpair failed and we were unable to recover it.
00:27:05.164 [2024-11-20 19:04:27.360731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.164 [2024-11-20 19:04:27.360764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.164 qpair failed and we were unable to recover it.
00:27:05.164 [2024-11-20 19:04:27.360868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.164 [2024-11-20 19:04:27.360901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.164 qpair failed and we were unable to recover it.
00:27:05.164 [2024-11-20 19:04:27.361079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.164 [2024-11-20 19:04:27.361110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.164 qpair failed and we were unable to recover it. 00:27:05.164 [2024-11-20 19:04:27.361237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.164 [2024-11-20 19:04:27.361272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.164 qpair failed and we were unable to recover it. 00:27:05.164 [2024-11-20 19:04:27.361486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.164 [2024-11-20 19:04:27.361519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.164 qpair failed and we were unable to recover it. 00:27:05.164 [2024-11-20 19:04:27.361634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.164 [2024-11-20 19:04:27.361666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.164 qpair failed and we were unable to recover it. 00:27:05.164 [2024-11-20 19:04:27.361784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.164 [2024-11-20 19:04:27.361817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.164 qpair failed and we were unable to recover it. 
00:27:05.164 [2024-11-20 19:04:27.361934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.164 [2024-11-20 19:04:27.361967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.164 qpair failed and we were unable to recover it. 00:27:05.164 [2024-11-20 19:04:27.362091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.164 [2024-11-20 19:04:27.362123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.164 qpair failed and we were unable to recover it. 00:27:05.164 [2024-11-20 19:04:27.362251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.164 [2024-11-20 19:04:27.362284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.164 qpair failed and we were unable to recover it. 00:27:05.164 [2024-11-20 19:04:27.362475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.164 [2024-11-20 19:04:27.362508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.164 qpair failed and we were unable to recover it. 00:27:05.164 [2024-11-20 19:04:27.362693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.164 [2024-11-20 19:04:27.362725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.164 qpair failed and we were unable to recover it. 
00:27:05.164 [2024-11-20 19:04:27.362847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.164 [2024-11-20 19:04:27.362879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.164 qpair failed and we were unable to recover it. 00:27:05.164 [2024-11-20 19:04:27.363006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.164 [2024-11-20 19:04:27.363040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.164 qpair failed and we were unable to recover it. 00:27:05.165 [2024-11-20 19:04:27.363235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.165 [2024-11-20 19:04:27.363269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.165 qpair failed and we were unable to recover it. 00:27:05.165 [2024-11-20 19:04:27.363384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.165 [2024-11-20 19:04:27.363417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.165 qpair failed and we were unable to recover it. 00:27:05.165 [2024-11-20 19:04:27.363525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.165 [2024-11-20 19:04:27.363557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.165 qpair failed and we were unable to recover it. 
00:27:05.165 [2024-11-20 19:04:27.363674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.165 [2024-11-20 19:04:27.363707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.165 qpair failed and we were unable to recover it. 00:27:05.165 [2024-11-20 19:04:27.363976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.165 [2024-11-20 19:04:27.364009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.165 qpair failed and we were unable to recover it. 00:27:05.165 [2024-11-20 19:04:27.364195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.165 [2024-11-20 19:04:27.364238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.165 qpair failed and we were unable to recover it. 00:27:05.165 [2024-11-20 19:04:27.364357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.165 [2024-11-20 19:04:27.364390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.165 qpair failed and we were unable to recover it. 00:27:05.165 [2024-11-20 19:04:27.364572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.165 [2024-11-20 19:04:27.364604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.165 qpair failed and we were unable to recover it. 
00:27:05.165 [2024-11-20 19:04:27.364715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.165 [2024-11-20 19:04:27.364747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.165 qpair failed and we were unable to recover it. 00:27:05.165 [2024-11-20 19:04:27.364857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.165 [2024-11-20 19:04:27.364888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.165 qpair failed and we were unable to recover it. 00:27:05.165 [2024-11-20 19:04:27.365010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.165 [2024-11-20 19:04:27.365042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.165 qpair failed and we were unable to recover it. 00:27:05.165 [2024-11-20 19:04:27.365227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.165 [2024-11-20 19:04:27.365268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.165 qpair failed and we were unable to recover it. 00:27:05.165 [2024-11-20 19:04:27.365394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.165 [2024-11-20 19:04:27.365427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.165 qpair failed and we were unable to recover it. 
00:27:05.165 [2024-11-20 19:04:27.365626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.165 [2024-11-20 19:04:27.365659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.165 qpair failed and we were unable to recover it. 00:27:05.165 [2024-11-20 19:04:27.365767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.165 [2024-11-20 19:04:27.365801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.165 qpair failed and we were unable to recover it. 00:27:05.165 [2024-11-20 19:04:27.365928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.165 [2024-11-20 19:04:27.365960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.165 qpair failed and we were unable to recover it. 00:27:05.165 [2024-11-20 19:04:27.366136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.165 [2024-11-20 19:04:27.366169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.165 qpair failed and we were unable to recover it. 00:27:05.165 [2024-11-20 19:04:27.366343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.165 [2024-11-20 19:04:27.366377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.165 qpair failed and we were unable to recover it. 
00:27:05.165 [2024-11-20 19:04:27.366552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.165 [2024-11-20 19:04:27.366584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.165 qpair failed and we were unable to recover it. 00:27:05.165 [2024-11-20 19:04:27.366705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.165 [2024-11-20 19:04:27.366737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.165 qpair failed and we were unable to recover it. 00:27:05.165 [2024-11-20 19:04:27.366860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.165 [2024-11-20 19:04:27.366895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.165 qpair failed and we were unable to recover it. 00:27:05.165 [2024-11-20 19:04:27.367026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.165 [2024-11-20 19:04:27.367058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.165 qpair failed and we were unable to recover it. 00:27:05.165 [2024-11-20 19:04:27.367184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.165 [2024-11-20 19:04:27.367226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.165 qpair failed and we were unable to recover it. 
00:27:05.165 [2024-11-20 19:04:27.367370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.165 [2024-11-20 19:04:27.367402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.165 qpair failed and we were unable to recover it. 00:27:05.165 [2024-11-20 19:04:27.367530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.165 [2024-11-20 19:04:27.367563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.165 qpair failed and we were unable to recover it. 00:27:05.165 [2024-11-20 19:04:27.367753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.165 [2024-11-20 19:04:27.367786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.165 qpair failed and we were unable to recover it. 00:27:05.165 [2024-11-20 19:04:27.367917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.165 [2024-11-20 19:04:27.367949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.165 qpair failed and we were unable to recover it. 00:27:05.165 [2024-11-20 19:04:27.368079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.165 [2024-11-20 19:04:27.368111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.165 qpair failed and we were unable to recover it. 
00:27:05.165 [2024-11-20 19:04:27.368302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.165 [2024-11-20 19:04:27.368349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.165 qpair failed and we were unable to recover it. 00:27:05.165 [2024-11-20 19:04:27.368478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.165 [2024-11-20 19:04:27.368517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.165 qpair failed and we were unable to recover it. 00:27:05.165 [2024-11-20 19:04:27.368692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.165 [2024-11-20 19:04:27.368724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.165 qpair failed and we were unable to recover it. 00:27:05.165 [2024-11-20 19:04:27.368828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.165 [2024-11-20 19:04:27.368866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.165 qpair failed and we were unable to recover it. 00:27:05.165 [2024-11-20 19:04:27.368987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.369019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 
00:27:05.166 [2024-11-20 19:04:27.369210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.369243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 00:27:05.166 [2024-11-20 19:04:27.369512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.369546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 00:27:05.166 [2024-11-20 19:04:27.369742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.369775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 00:27:05.166 [2024-11-20 19:04:27.369916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.369949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 00:27:05.166 [2024-11-20 19:04:27.370142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.370175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 
00:27:05.166 [2024-11-20 19:04:27.370403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.370439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 00:27:05.166 [2024-11-20 19:04:27.370630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.370662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 00:27:05.166 [2024-11-20 19:04:27.370777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.370810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 00:27:05.166 [2024-11-20 19:04:27.371007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.371041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 00:27:05.166 [2024-11-20 19:04:27.371235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.371269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 
00:27:05.166 [2024-11-20 19:04:27.371386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.371418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 00:27:05.166 [2024-11-20 19:04:27.371547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.371579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 00:27:05.166 [2024-11-20 19:04:27.371700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.371732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 00:27:05.166 [2024-11-20 19:04:27.371841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.371874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 00:27:05.166 [2024-11-20 19:04:27.372115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.372147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 
00:27:05.166 [2024-11-20 19:04:27.372337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.372371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 00:27:05.166 [2024-11-20 19:04:27.372491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.372524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 00:27:05.166 [2024-11-20 19:04:27.372739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.372770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 00:27:05.166 [2024-11-20 19:04:27.372903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.372941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 00:27:05.166 [2024-11-20 19:04:27.373152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.373185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 
00:27:05.166 [2024-11-20 19:04:27.373394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.373427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 00:27:05.166 [2024-11-20 19:04:27.373607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.373639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 00:27:05.166 [2024-11-20 19:04:27.373854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.373886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 00:27:05.166 [2024-11-20 19:04:27.374068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.374100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 00:27:05.166 [2024-11-20 19:04:27.374231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.374264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 
00:27:05.166 [2024-11-20 19:04:27.374389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.374421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 00:27:05.166 [2024-11-20 19:04:27.374629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.374661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 00:27:05.166 [2024-11-20 19:04:27.374789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.374821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 00:27:05.166 [2024-11-20 19:04:27.374993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.375026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 00:27:05.166 [2024-11-20 19:04:27.375157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.375189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 
00:27:05.166 [2024-11-20 19:04:27.375380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.375413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 00:27:05.166 [2024-11-20 19:04:27.375518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.375552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 00:27:05.166 [2024-11-20 19:04:27.375730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.375762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 00:27:05.166 [2024-11-20 19:04:27.375888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.375921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 00:27:05.166 [2024-11-20 19:04:27.376097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.376130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 
00:27:05.166 [2024-11-20 19:04:27.376319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.376352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 00:27:05.166 [2024-11-20 19:04:27.376613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.166 [2024-11-20 19:04:27.376646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.166 qpair failed and we were unable to recover it. 00:27:05.167 [2024-11-20 19:04:27.376837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.376869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 00:27:05.167 [2024-11-20 19:04:27.377092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.377125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 00:27:05.167 [2024-11-20 19:04:27.377256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.377290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 
00:27:05.167 [2024-11-20 19:04:27.377418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.377451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 00:27:05.167 [2024-11-20 19:04:27.377639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.377672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 00:27:05.167 [2024-11-20 19:04:27.377810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.377842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 00:27:05.167 [2024-11-20 19:04:27.378047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.378080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 00:27:05.167 [2024-11-20 19:04:27.378220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.378253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 
00:27:05.167 [2024-11-20 19:04:27.378378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.378411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 00:27:05.167 [2024-11-20 19:04:27.378521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.378553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 00:27:05.167 [2024-11-20 19:04:27.378677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.378710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 00:27:05.167 [2024-11-20 19:04:27.378909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.378942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 00:27:05.167 [2024-11-20 19:04:27.379056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.379089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 
00:27:05.167 [2024-11-20 19:04:27.379289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.379324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 00:27:05.167 [2024-11-20 19:04:27.379507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.379539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 00:27:05.167 [2024-11-20 19:04:27.379733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.379765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 00:27:05.167 [2024-11-20 19:04:27.379949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.379982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 00:27:05.167 [2024-11-20 19:04:27.380097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.380129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 
00:27:05.167 [2024-11-20 19:04:27.380395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.380429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 00:27:05.167 [2024-11-20 19:04:27.380621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.380653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 00:27:05.167 [2024-11-20 19:04:27.380778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.380811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 00:27:05.167 [2024-11-20 19:04:27.380923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.380962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 00:27:05.167 [2024-11-20 19:04:27.381137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.381170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 
00:27:05.167 [2024-11-20 19:04:27.381363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.381396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 00:27:05.167 [2024-11-20 19:04:27.381518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.381550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 00:27:05.167 [2024-11-20 19:04:27.381660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.381699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 00:27:05.167 [2024-11-20 19:04:27.381885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.381918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 00:27:05.167 [2024-11-20 19:04:27.382022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.382056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 
00:27:05.167 [2024-11-20 19:04:27.382250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.382284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 00:27:05.167 [2024-11-20 19:04:27.382474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.382507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 00:27:05.167 [2024-11-20 19:04:27.382686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.382719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 00:27:05.167 [2024-11-20 19:04:27.382843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.382877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 00:27:05.167 [2024-11-20 19:04:27.382991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.383024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 
00:27:05.167 [2024-11-20 19:04:27.383214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.383248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 00:27:05.167 [2024-11-20 19:04:27.383356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.383387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 00:27:05.167 [2024-11-20 19:04:27.383512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.383552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 00:27:05.167 [2024-11-20 19:04:27.383670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.383702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 00:27:05.167 [2024-11-20 19:04:27.383875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.167 [2024-11-20 19:04:27.383908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.167 qpair failed and we were unable to recover it. 
00:27:05.168 [2024-11-20 19:04:27.384020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.384052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 00:27:05.168 [2024-11-20 19:04:27.384237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.384271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 00:27:05.168 [2024-11-20 19:04:27.384457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.384490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 00:27:05.168 [2024-11-20 19:04:27.384617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.384649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 00:27:05.168 [2024-11-20 19:04:27.384848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.384880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 
00:27:05.168 [2024-11-20 19:04:27.385013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.385045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 00:27:05.168 [2024-11-20 19:04:27.385153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.385185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 00:27:05.168 [2024-11-20 19:04:27.385383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.385415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 00:27:05.168 [2024-11-20 19:04:27.385611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.385644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 00:27:05.168 [2024-11-20 19:04:27.385838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.385871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 
00:27:05.168 [2024-11-20 19:04:27.385994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.386027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 00:27:05.168 [2024-11-20 19:04:27.386135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.386167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 00:27:05.168 [2024-11-20 19:04:27.386331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.386364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 00:27:05.168 [2024-11-20 19:04:27.386479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.386512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 00:27:05.168 [2024-11-20 19:04:27.386644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.386677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 
00:27:05.168 [2024-11-20 19:04:27.386856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.386888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 00:27:05.168 [2024-11-20 19:04:27.387069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.387101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 00:27:05.168 [2024-11-20 19:04:27.387302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.387336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 00:27:05.168 [2024-11-20 19:04:27.387453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.387485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 00:27:05.168 [2024-11-20 19:04:27.387593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.387626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 
00:27:05.168 [2024-11-20 19:04:27.387831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.387863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 00:27:05.168 [2024-11-20 19:04:27.388008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.388042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 00:27:05.168 [2024-11-20 19:04:27.388227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.388260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 00:27:05.168 [2024-11-20 19:04:27.388471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.388517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 00:27:05.168 [2024-11-20 19:04:27.388719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.388751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 
00:27:05.168 [2024-11-20 19:04:27.388863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.388896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 00:27:05.168 [2024-11-20 19:04:27.389016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.389048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 00:27:05.168 [2024-11-20 19:04:27.389245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.389279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 00:27:05.168 [2024-11-20 19:04:27.389384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.389416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 00:27:05.168 [2024-11-20 19:04:27.389634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.389666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 
00:27:05.168 [2024-11-20 19:04:27.389849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.389882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 00:27:05.168 [2024-11-20 19:04:27.390056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.390089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 00:27:05.168 [2024-11-20 19:04:27.390214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.390247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 00:27:05.168 [2024-11-20 19:04:27.390435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.390468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 00:27:05.168 [2024-11-20 19:04:27.390712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.390745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 
00:27:05.168 [2024-11-20 19:04:27.390943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.390976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 00:27:05.168 [2024-11-20 19:04:27.391157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.391189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.168 qpair failed and we were unable to recover it. 00:27:05.168 [2024-11-20 19:04:27.391331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.168 [2024-11-20 19:04:27.391365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.169 qpair failed and we were unable to recover it. 00:27:05.169 [2024-11-20 19:04:27.391501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.169 [2024-11-20 19:04:27.391540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.169 qpair failed and we were unable to recover it. 00:27:05.169 [2024-11-20 19:04:27.391783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.169 [2024-11-20 19:04:27.391816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.169 qpair failed and we were unable to recover it. 
00:27:05.169 [2024-11-20 19:04:27.392055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.169 [2024-11-20 19:04:27.392087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.169 qpair failed and we were unable to recover it. 00:27:05.169 [2024-11-20 19:04:27.392273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.169 [2024-11-20 19:04:27.392307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.169 qpair failed and we were unable to recover it. 00:27:05.169 [2024-11-20 19:04:27.392518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.169 [2024-11-20 19:04:27.392552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.169 qpair failed and we were unable to recover it. 00:27:05.169 [2024-11-20 19:04:27.392737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.169 [2024-11-20 19:04:27.392769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.169 qpair failed and we were unable to recover it. 00:27:05.169 [2024-11-20 19:04:27.392987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.169 [2024-11-20 19:04:27.393020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.169 qpair failed and we were unable to recover it. 
00:27:05.169 [2024-11-20 19:04:27.393222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.169 [2024-11-20 19:04:27.393256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.169 qpair failed and we were unable to recover it. 00:27:05.169 [2024-11-20 19:04:27.393442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.169 [2024-11-20 19:04:27.393475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.169 qpair failed and we were unable to recover it. 00:27:05.169 [2024-11-20 19:04:27.393676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.169 [2024-11-20 19:04:27.393709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.169 qpair failed and we were unable to recover it. 00:27:05.169 [2024-11-20 19:04:27.393979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.169 [2024-11-20 19:04:27.394012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.169 qpair failed and we were unable to recover it. 00:27:05.169 [2024-11-20 19:04:27.394136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.169 [2024-11-20 19:04:27.394168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.169 qpair failed and we were unable to recover it. 
00:27:05.169 [2024-11-20 19:04:27.394314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.169 [2024-11-20 19:04:27.394349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.169 qpair failed and we were unable to recover it. 00:27:05.169 [2024-11-20 19:04:27.394463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.169 [2024-11-20 19:04:27.394496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.169 qpair failed and we were unable to recover it. 00:27:05.169 [2024-11-20 19:04:27.394676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.169 [2024-11-20 19:04:27.394708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.169 qpair failed and we were unable to recover it. 00:27:05.169 [2024-11-20 19:04:27.394841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.169 [2024-11-20 19:04:27.394874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.169 qpair failed and we were unable to recover it. 00:27:05.169 [2024-11-20 19:04:27.395144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.169 [2024-11-20 19:04:27.395177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.169 qpair failed and we were unable to recover it. 
00:27:05.169 [2024-11-20 19:04:27.395479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.169 [2024-11-20 19:04:27.395512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.169 qpair failed and we were unable to recover it. 00:27:05.169 [2024-11-20 19:04:27.395646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.169 [2024-11-20 19:04:27.395679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.169 qpair failed and we were unable to recover it. 00:27:05.169 [2024-11-20 19:04:27.395867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.169 [2024-11-20 19:04:27.395900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.169 qpair failed and we were unable to recover it. 00:27:05.169 [2024-11-20 19:04:27.396032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.169 [2024-11-20 19:04:27.396064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.169 qpair failed and we were unable to recover it. 00:27:05.169 [2024-11-20 19:04:27.396257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.169 [2024-11-20 19:04:27.396291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.169 qpair failed and we were unable to recover it. 
00:27:05.169 [2024-11-20 19:04:27.396467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.169 [2024-11-20 19:04:27.396500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.169 qpair failed and we were unable to recover it. 00:27:05.169 [2024-11-20 19:04:27.396623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.169 [2024-11-20 19:04:27.396656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.169 qpair failed and we were unable to recover it. 00:27:05.169 [2024-11-20 19:04:27.396846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.169 [2024-11-20 19:04:27.396879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.169 qpair failed and we were unable to recover it. 00:27:05.169 [2024-11-20 19:04:27.397008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.169 [2024-11-20 19:04:27.397048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.169 qpair failed and we were unable to recover it. 00:27:05.169 [2024-11-20 19:04:27.397299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.169 [2024-11-20 19:04:27.397335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.169 qpair failed and we were unable to recover it. 
00:27:05.172 [2024-11-20 19:04:27.418450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.172 [2024-11-20 19:04:27.418540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.172 qpair failed and we were unable to recover it.
[the same three-record failure sequence repeats 99 more times for tqpair=0x7f7424000b90, timestamps 19:04:27.418707 through 19:04:27.436936]
00:27:05.175 [2024-11-20 19:04:27.437051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.175 [2024-11-20 19:04:27.437083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.175 qpair failed and we were unable to recover it. 00:27:05.175 [2024-11-20 19:04:27.437267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.175 [2024-11-20 19:04:27.437301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.175 qpair failed and we were unable to recover it. 00:27:05.175 [2024-11-20 19:04:27.437417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.175 [2024-11-20 19:04:27.437449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.175 qpair failed and we were unable to recover it. 00:27:05.175 [2024-11-20 19:04:27.437661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.175 [2024-11-20 19:04:27.437694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.175 qpair failed and we were unable to recover it. 00:27:05.175 [2024-11-20 19:04:27.437831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.175 [2024-11-20 19:04:27.437864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.175 qpair failed and we were unable to recover it. 
00:27:05.175 [2024-11-20 19:04:27.438052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.175 [2024-11-20 19:04:27.438085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.175 qpair failed and we were unable to recover it. 00:27:05.175 [2024-11-20 19:04:27.438221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.175 [2024-11-20 19:04:27.438255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.175 qpair failed and we were unable to recover it. 00:27:05.175 [2024-11-20 19:04:27.438437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.175 [2024-11-20 19:04:27.438470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.175 qpair failed and we were unable to recover it. 00:27:05.175 [2024-11-20 19:04:27.438664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.175 [2024-11-20 19:04:27.438697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.175 qpair failed and we were unable to recover it. 00:27:05.175 [2024-11-20 19:04:27.438832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.175 [2024-11-20 19:04:27.438865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.175 qpair failed and we were unable to recover it. 
00:27:05.175 [2024-11-20 19:04:27.439050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.175 [2024-11-20 19:04:27.439083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.175 qpair failed and we were unable to recover it. 00:27:05.175 [2024-11-20 19:04:27.439213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.175 [2024-11-20 19:04:27.439247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.175 qpair failed and we were unable to recover it. 00:27:05.175 [2024-11-20 19:04:27.439426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.175 [2024-11-20 19:04:27.439460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.175 qpair failed and we were unable to recover it. 00:27:05.175 [2024-11-20 19:04:27.439582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.175 [2024-11-20 19:04:27.439614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.175 qpair failed and we were unable to recover it. 00:27:05.175 [2024-11-20 19:04:27.439722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.175 [2024-11-20 19:04:27.439756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.175 qpair failed and we were unable to recover it. 
00:27:05.175 [2024-11-20 19:04:27.439860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.175 [2024-11-20 19:04:27.439893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.175 qpair failed and we were unable to recover it. 00:27:05.175 [2024-11-20 19:04:27.440132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.175 [2024-11-20 19:04:27.440165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.175 qpair failed and we were unable to recover it. 00:27:05.175 [2024-11-20 19:04:27.440299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.175 [2024-11-20 19:04:27.440333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.175 qpair failed and we were unable to recover it. 00:27:05.175 [2024-11-20 19:04:27.440517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.175 [2024-11-20 19:04:27.440551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.175 qpair failed and we were unable to recover it. 00:27:05.175 [2024-11-20 19:04:27.440659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.175 [2024-11-20 19:04:27.440692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.175 qpair failed and we were unable to recover it. 
00:27:05.175 [2024-11-20 19:04:27.440894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.175 [2024-11-20 19:04:27.440933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.175 qpair failed and we were unable to recover it. 00:27:05.175 [2024-11-20 19:04:27.441063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.175 [2024-11-20 19:04:27.441095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.175 qpair failed and we were unable to recover it. 00:27:05.175 [2024-11-20 19:04:27.441219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.175 [2024-11-20 19:04:27.441253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.175 qpair failed and we were unable to recover it. 00:27:05.175 [2024-11-20 19:04:27.441436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.441468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 00:27:05.176 [2024-11-20 19:04:27.441659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.441693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 
00:27:05.176 [2024-11-20 19:04:27.441808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.441840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 00:27:05.176 [2024-11-20 19:04:27.441962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.441996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 00:27:05.176 [2024-11-20 19:04:27.442174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.442216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 00:27:05.176 [2024-11-20 19:04:27.442346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.442379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 00:27:05.176 [2024-11-20 19:04:27.442563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.442596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 
00:27:05.176 [2024-11-20 19:04:27.442769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.442803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 00:27:05.176 [2024-11-20 19:04:27.442950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.442983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 00:27:05.176 [2024-11-20 19:04:27.443100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.443133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 00:27:05.176 [2024-11-20 19:04:27.443312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.443347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 00:27:05.176 [2024-11-20 19:04:27.443466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.443508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 
00:27:05.176 [2024-11-20 19:04:27.443689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.443722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 00:27:05.176 [2024-11-20 19:04:27.443844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.443877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 00:27:05.176 [2024-11-20 19:04:27.444065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.444098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 00:27:05.176 [2024-11-20 19:04:27.444221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.444255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 00:27:05.176 [2024-11-20 19:04:27.444390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.444423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 
00:27:05.176 [2024-11-20 19:04:27.444539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.444571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 00:27:05.176 [2024-11-20 19:04:27.444686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.444720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 00:27:05.176 [2024-11-20 19:04:27.444843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.444876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 00:27:05.176 [2024-11-20 19:04:27.444987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.445019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 00:27:05.176 [2024-11-20 19:04:27.445145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.445178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 
00:27:05.176 [2024-11-20 19:04:27.445320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.445354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 00:27:05.176 [2024-11-20 19:04:27.445468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.445501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 00:27:05.176 [2024-11-20 19:04:27.445616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.445649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 00:27:05.176 [2024-11-20 19:04:27.445760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.445793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 00:27:05.176 [2024-11-20 19:04:27.445969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.446003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 
00:27:05.176 [2024-11-20 19:04:27.446111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.446145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 00:27:05.176 [2024-11-20 19:04:27.446272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.446307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 00:27:05.176 [2024-11-20 19:04:27.446486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.446518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 00:27:05.176 [2024-11-20 19:04:27.446700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.446733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 00:27:05.176 [2024-11-20 19:04:27.446854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.446887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 
00:27:05.176 [2024-11-20 19:04:27.447086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.447118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 00:27:05.176 [2024-11-20 19:04:27.447228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.447263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 00:27:05.176 [2024-11-20 19:04:27.447390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.447424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 00:27:05.176 [2024-11-20 19:04:27.447601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.447634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 00:27:05.176 [2024-11-20 19:04:27.447742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.447776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 
00:27:05.176 [2024-11-20 19:04:27.447891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.176 [2024-11-20 19:04:27.447931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.176 qpair failed and we were unable to recover it. 00:27:05.176 [2024-11-20 19:04:27.448041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.177 [2024-11-20 19:04:27.448074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.177 qpair failed and we were unable to recover it. 00:27:05.177 [2024-11-20 19:04:27.448287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.177 [2024-11-20 19:04:27.448322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.177 qpair failed and we were unable to recover it. 00:27:05.177 [2024-11-20 19:04:27.448433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.177 [2024-11-20 19:04:27.448466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.177 qpair failed and we were unable to recover it. 00:27:05.177 [2024-11-20 19:04:27.448642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.177 [2024-11-20 19:04:27.448675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.177 qpair failed and we were unable to recover it. 
00:27:05.177 [2024-11-20 19:04:27.448793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.177 [2024-11-20 19:04:27.448825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.177 qpair failed and we were unable to recover it. 00:27:05.177 [2024-11-20 19:04:27.448967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.177 [2024-11-20 19:04:27.449002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.177 qpair failed and we were unable to recover it. 00:27:05.177 [2024-11-20 19:04:27.449113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.177 [2024-11-20 19:04:27.449145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.177 qpair failed and we were unable to recover it. 00:27:05.177 [2024-11-20 19:04:27.449337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.177 [2024-11-20 19:04:27.449372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.177 qpair failed and we were unable to recover it. 00:27:05.177 [2024-11-20 19:04:27.449569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.177 [2024-11-20 19:04:27.449602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.177 qpair failed and we were unable to recover it. 
00:27:05.177 [2024-11-20 19:04:27.449728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.177 [2024-11-20 19:04:27.449761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.177 qpair failed and we were unable to recover it. 00:27:05.177 [2024-11-20 19:04:27.449872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.177 [2024-11-20 19:04:27.449904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.177 qpair failed and we were unable to recover it. 00:27:05.177 [2024-11-20 19:04:27.450077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.177 [2024-11-20 19:04:27.450110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.177 qpair failed and we were unable to recover it. 00:27:05.177 [2024-11-20 19:04:27.450241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.177 [2024-11-20 19:04:27.450277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.177 qpair failed and we were unable to recover it. 00:27:05.177 [2024-11-20 19:04:27.450469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.177 [2024-11-20 19:04:27.450508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.177 qpair failed and we were unable to recover it. 
00:27:05.177 [2024-11-20 19:04:27.450618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.177 [2024-11-20 19:04:27.450657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.177 qpair failed and we were unable to recover it.
00:27:05.177 [2024-11-20 19:04:27.450836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.177 [2024-11-20 19:04:27.450876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.177 qpair failed and we were unable to recover it.
00:27:05.177 [2024-11-20 19:04:27.450986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.177 [2024-11-20 19:04:27.451022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.177 qpair failed and we were unable to recover it.
00:27:05.177 [2024-11-20 19:04:27.451242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.177 [2024-11-20 19:04:27.451277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.177 qpair failed and we were unable to recover it.
00:27:05.177 [2024-11-20 19:04:27.451515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.177 [2024-11-20 19:04:27.451548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.177 qpair failed and we were unable to recover it.
00:27:05.177 [2024-11-20 19:04:27.451726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.177 [2024-11-20 19:04:27.451759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.177 qpair failed and we were unable to recover it.
00:27:05.177 [2024-11-20 19:04:27.451868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.177 [2024-11-20 19:04:27.451913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.177 qpair failed and we were unable to recover it.
00:27:05.177 [2024-11-20 19:04:27.452140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.177 [2024-11-20 19:04:27.452171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.177 qpair failed and we were unable to recover it.
00:27:05.177 [2024-11-20 19:04:27.452285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.177 [2024-11-20 19:04:27.452317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.177 qpair failed and we were unable to recover it.
00:27:05.177 [2024-11-20 19:04:27.452502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.177 [2024-11-20 19:04:27.452533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.177 qpair failed and we were unable to recover it.
00:27:05.177 [2024-11-20 19:04:27.452701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.177 [2024-11-20 19:04:27.452731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.177 qpair failed and we were unable to recover it.
00:27:05.177 [2024-11-20 19:04:27.452908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.177 [2024-11-20 19:04:27.452938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.177 qpair failed and we were unable to recover it.
00:27:05.177 [2024-11-20 19:04:27.453062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.177 [2024-11-20 19:04:27.453092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.177 qpair failed and we were unable to recover it.
00:27:05.177 [2024-11-20 19:04:27.453346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.177 [2024-11-20 19:04:27.453378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.177 qpair failed and we were unable to recover it.
00:27:05.177 [2024-11-20 19:04:27.453580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.177 [2024-11-20 19:04:27.453612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.177 qpair failed and we were unable to recover it.
00:27:05.177 [2024-11-20 19:04:27.453810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.177 [2024-11-20 19:04:27.453843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.177 qpair failed and we were unable to recover it.
00:27:05.177 [2024-11-20 19:04:27.453970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.177 [2024-11-20 19:04:27.454002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.177 qpair failed and we were unable to recover it.
00:27:05.177 [2024-11-20 19:04:27.454222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.177 [2024-11-20 19:04:27.454257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.177 qpair failed and we were unable to recover it.
00:27:05.177 [2024-11-20 19:04:27.454367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.177 [2024-11-20 19:04:27.454400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.177 qpair failed and we were unable to recover it.
00:27:05.177 [2024-11-20 19:04:27.454599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.177 [2024-11-20 19:04:27.454632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.177 qpair failed and we were unable to recover it.
00:27:05.177 [2024-11-20 19:04:27.454846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.177 [2024-11-20 19:04:27.454880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.177 qpair failed and we were unable to recover it.
00:27:05.177 [2024-11-20 19:04:27.454991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.177 [2024-11-20 19:04:27.455024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.177 qpair failed and we were unable to recover it.
00:27:05.177 [2024-11-20 19:04:27.455198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.177 [2024-11-20 19:04:27.455262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.177 qpair failed and we were unable to recover it.
00:27:05.177 [2024-11-20 19:04:27.455377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.177 [2024-11-20 19:04:27.455410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.177 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.455540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.455572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.455752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.455793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.456029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.456058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.456229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.456260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.456438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.456469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.456726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.456756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.457012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.457042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.457250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.457282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.457471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.457501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.457624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.457654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.457842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.457871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.457986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.458017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.458259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.458294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.458482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.458514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.458708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.458742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.458985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.459017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.459146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.459176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.459304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.459335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.459544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.459575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.459697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.459727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.459987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.460021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.460265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.460299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.460431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.460463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.460590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.460623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.460741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.460771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.460963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.460996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.461119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.461152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.461291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.461325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.461445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.461478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.461665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.461694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.461793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.461824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.461987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.462031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.462147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.462180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.462309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.462342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.462585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.462619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.462805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.462838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.463019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.178 [2024-11-20 19:04:27.463052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.178 qpair failed and we were unable to recover it.
00:27:05.178 [2024-11-20 19:04:27.463263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.179 [2024-11-20 19:04:27.463299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.179 qpair failed and we were unable to recover it.
00:27:05.179 [2024-11-20 19:04:27.463426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.179 [2024-11-20 19:04:27.463460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.179 qpair failed and we were unable to recover it.
00:27:05.179 [2024-11-20 19:04:27.463673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.179 [2024-11-20 19:04:27.463705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.179 qpair failed and we were unable to recover it.
00:27:05.179 [2024-11-20 19:04:27.463814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.179 [2024-11-20 19:04:27.463847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.179 qpair failed and we were unable to recover it.
00:27:05.179 [2024-11-20 19:04:27.463973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.179 [2024-11-20 19:04:27.464012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.179 qpair failed and we were unable to recover it.
00:27:05.179 [2024-11-20 19:04:27.464183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.179 [2024-11-20 19:04:27.464227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.179 qpair failed and we were unable to recover it.
00:27:05.179 [2024-11-20 19:04:27.464332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.179 [2024-11-20 19:04:27.464365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.179 qpair failed and we were unable to recover it.
00:27:05.179 [2024-11-20 19:04:27.464559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.179 [2024-11-20 19:04:27.464591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.179 qpair failed and we were unable to recover it.
00:27:05.179 [2024-11-20 19:04:27.464765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.179 [2024-11-20 19:04:27.464798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.179 qpair failed and we were unable to recover it.
00:27:05.179 [2024-11-20 19:04:27.464904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.179 [2024-11-20 19:04:27.464937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.179 qpair failed and we were unable to recover it.
00:27:05.179 [2024-11-20 19:04:27.465057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.179 [2024-11-20 19:04:27.465090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.179 qpair failed and we were unable to recover it.
00:27:05.179 [2024-11-20 19:04:27.465268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.179 [2024-11-20 19:04:27.465302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.179 qpair failed and we were unable to recover it.
00:27:05.179 [2024-11-20 19:04:27.465491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.179 [2024-11-20 19:04:27.465523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.179 qpair failed and we were unable to recover it.
00:27:05.179 [2024-11-20 19:04:27.465646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.179 [2024-11-20 19:04:27.465680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.179 qpair failed and we were unable to recover it.
00:27:05.179 [2024-11-20 19:04:27.465852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.179 [2024-11-20 19:04:27.465885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.179 qpair failed and we were unable to recover it.
00:27:05.179 [2024-11-20 19:04:27.466002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.179 [2024-11-20 19:04:27.466035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.179 qpair failed and we were unable to recover it.
00:27:05.179 [2024-11-20 19:04:27.466212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.179 [2024-11-20 19:04:27.466246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.179 qpair failed and we were unable to recover it.
00:27:05.179 [2024-11-20 19:04:27.466418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.179 [2024-11-20 19:04:27.466451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.179 qpair failed and we were unable to recover it.
00:27:05.179 [2024-11-20 19:04:27.466560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.179 [2024-11-20 19:04:27.466592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.179 qpair failed and we were unable to recover it.
00:27:05.179 [2024-11-20 19:04:27.466834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.179 [2024-11-20 19:04:27.466867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.179 qpair failed and we were unable to recover it.
00:27:05.179 [2024-11-20 19:04:27.467002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.179 [2024-11-20 19:04:27.467034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.179 qpair failed and we were unable to recover it.
00:27:05.179 [2024-11-20 19:04:27.467242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.179 [2024-11-20 19:04:27.467276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.179 qpair failed and we were unable to recover it.
00:27:05.179 [2024-11-20 19:04:27.467392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.179 [2024-11-20 19:04:27.467424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.179 qpair failed and we were unable to recover it.
00:27:05.179 [2024-11-20 19:04:27.467666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.179 [2024-11-20 19:04:27.467699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.179 qpair failed and we were unable to recover it.
00:27:05.179 [2024-11-20 19:04:27.467816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.179 [2024-11-20 19:04:27.467849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.179 qpair failed and we were unable to recover it.
00:27:05.179 [2024-11-20 19:04:27.467967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.179 [2024-11-20 19:04:27.467999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.179 qpair failed and we were unable to recover it.
00:27:05.179 [2024-11-20 19:04:27.468111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.179 [2024-11-20 19:04:27.468144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.179 qpair failed and we were unable to recover it.
00:27:05.179 [2024-11-20 19:04:27.468266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.179 [2024-11-20 19:04:27.468303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.179 qpair failed and we were unable to recover it.
00:27:05.179 [2024-11-20 19:04:27.468429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.179 [2024-11-20 19:04:27.468461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.179 qpair failed and we were unable to recover it.
00:27:05.179 [2024-11-20 19:04:27.468581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.179 [2024-11-20 19:04:27.468613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.179 qpair failed and we were unable to recover it.
00:27:05.179 [2024-11-20 19:04:27.468739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.179 [2024-11-20 19:04:27.468771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.179 qpair failed and we were unable to recover it.
00:27:05.179 [2024-11-20 19:04:27.469017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.179 [2024-11-20 19:04:27.469050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.179 qpair failed and we were unable to recover it.
00:27:05.179 [2024-11-20 19:04:27.469248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.179 [2024-11-20 19:04:27.469281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.179 qpair failed and we were unable to recover it.
00:27:05.179 [2024-11-20 19:04:27.469463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.179 [2024-11-20 19:04:27.469497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.179 qpair failed and we were unable to recover it.
00:27:05.179 [2024-11-20 19:04:27.469616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.179 [2024-11-20 19:04:27.469649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.180 qpair failed and we were unable to recover it.
00:27:05.180 [2024-11-20 19:04:27.469827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.180 [2024-11-20 19:04:27.469860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.180 qpair failed and we were unable to recover it.
00:27:05.180 [2024-11-20 19:04:27.469976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.180 [2024-11-20 19:04:27.470010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.180 qpair failed and we were unable to recover it.
00:27:05.180 [2024-11-20 19:04:27.470198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.180 [2024-11-20 19:04:27.470239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.180 qpair failed and we were unable to recover it.
00:27:05.180 [2024-11-20 19:04:27.470352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.180 [2024-11-20 19:04:27.470385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.180 qpair failed and we were unable to recover it.
00:27:05.180 [2024-11-20 19:04:27.470521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.180 [2024-11-20 19:04:27.470555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.180 qpair failed and we were unable to recover it.
00:27:05.180 [2024-11-20 19:04:27.470672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.180 [2024-11-20 19:04:27.470705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.180 qpair failed and we were unable to recover it.
00:27:05.180 [2024-11-20 19:04:27.470916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.180 [2024-11-20 19:04:27.470950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.180 qpair failed and we were unable to recover it.
00:27:05.180 [2024-11-20 19:04:27.471126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.180 [2024-11-20 19:04:27.471161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.180 qpair failed and we were unable to recover it.
00:27:05.180 [2024-11-20 19:04:27.471311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.180 [2024-11-20 19:04:27.471346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.180 qpair failed and we were unable to recover it.
00:27:05.180 [2024-11-20 19:04:27.471475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.180 [2024-11-20 19:04:27.471514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.180 qpair failed and we were unable to recover it.
00:27:05.180 [2024-11-20 19:04:27.471787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.180 [2024-11-20 19:04:27.471819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.180 qpair failed and we were unable to recover it.
00:27:05.180 [2024-11-20 19:04:27.471998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.180 [2024-11-20 19:04:27.472030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.180 qpair failed and we were unable to recover it.
00:27:05.180 [2024-11-20 19:04:27.472165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.180 [2024-11-20 19:04:27.472198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.180 qpair failed and we were unable to recover it.
00:27:05.180 [2024-11-20 19:04:27.472410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.180 [2024-11-20 19:04:27.472445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.180 qpair failed and we were unable to recover it.
00:27:05.180 [2024-11-20 19:04:27.472675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.180 [2024-11-20 19:04:27.472707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.180 qpair failed and we were unable to recover it.
00:27:05.180 [2024-11-20 19:04:27.472897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.180 [2024-11-20 19:04:27.472929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.180 qpair failed and we were unable to recover it.
00:27:05.180 [2024-11-20 19:04:27.473041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.180 [2024-11-20 19:04:27.473078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.180 qpair failed and we were unable to recover it.
00:27:05.180 [2024-11-20 19:04:27.473238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.180 [2024-11-20 19:04:27.473272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.180 qpair failed and we were unable to recover it.
00:27:05.180 [2024-11-20 19:04:27.473389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.180 [2024-11-20 19:04:27.473421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.180 qpair failed and we were unable to recover it.
00:27:05.180 [2024-11-20 19:04:27.473528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.180 [2024-11-20 19:04:27.473561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.180 qpair failed and we were unable to recover it. 00:27:05.180 [2024-11-20 19:04:27.473730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.180 [2024-11-20 19:04:27.473762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.180 qpair failed and we were unable to recover it. 00:27:05.180 [2024-11-20 19:04:27.473931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.180 [2024-11-20 19:04:27.473964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.180 qpair failed and we were unable to recover it. 00:27:05.180 [2024-11-20 19:04:27.474081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.180 [2024-11-20 19:04:27.474113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.180 qpair failed and we were unable to recover it. 00:27:05.180 [2024-11-20 19:04:27.474304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.180 [2024-11-20 19:04:27.474338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.180 qpair failed and we were unable to recover it. 
00:27:05.180 [2024-11-20 19:04:27.474460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.180 [2024-11-20 19:04:27.474493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.180 qpair failed and we were unable to recover it. 00:27:05.454 [2024-11-20 19:04:27.474733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.454 [2024-11-20 19:04:27.474766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.454 qpair failed and we were unable to recover it. 00:27:05.454 [2024-11-20 19:04:27.474968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.454 [2024-11-20 19:04:27.475001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.454 qpair failed and we were unable to recover it. 00:27:05.455 [2024-11-20 19:04:27.475181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.455 [2024-11-20 19:04:27.475221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.455 qpair failed and we were unable to recover it. 00:27:05.455 [2024-11-20 19:04:27.475473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.455 [2024-11-20 19:04:27.475505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.455 qpair failed and we were unable to recover it. 
00:27:05.455 [2024-11-20 19:04:27.475626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.455 [2024-11-20 19:04:27.475659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.455 qpair failed and we were unable to recover it. 00:27:05.455 [2024-11-20 19:04:27.475852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.455 [2024-11-20 19:04:27.475885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.455 qpair failed and we were unable to recover it. 00:27:05.455 [2024-11-20 19:04:27.476172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.455 [2024-11-20 19:04:27.476211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.455 qpair failed and we were unable to recover it. 00:27:05.455 [2024-11-20 19:04:27.476389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.455 [2024-11-20 19:04:27.476428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.455 qpair failed and we were unable to recover it. 00:27:05.455 [2024-11-20 19:04:27.476635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.455 [2024-11-20 19:04:27.476668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.455 qpair failed and we were unable to recover it. 
00:27:05.455 [2024-11-20 19:04:27.476811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.455 [2024-11-20 19:04:27.476843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.455 qpair failed and we were unable to recover it. 00:27:05.455 [2024-11-20 19:04:27.477020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.455 [2024-11-20 19:04:27.477052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.455 qpair failed and we were unable to recover it. 00:27:05.455 [2024-11-20 19:04:27.477196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.455 [2024-11-20 19:04:27.477258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.455 qpair failed and we were unable to recover it. 00:27:05.455 [2024-11-20 19:04:27.477448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.455 [2024-11-20 19:04:27.477481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.455 qpair failed and we were unable to recover it. 00:27:05.455 [2024-11-20 19:04:27.477591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.455 [2024-11-20 19:04:27.477624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.455 qpair failed and we were unable to recover it. 
00:27:05.455 [2024-11-20 19:04:27.477798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.455 [2024-11-20 19:04:27.477832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.455 qpair failed and we were unable to recover it. 00:27:05.455 [2024-11-20 19:04:27.477949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.455 [2024-11-20 19:04:27.477981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.455 qpair failed and we were unable to recover it. 00:27:05.455 [2024-11-20 19:04:27.478153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.455 [2024-11-20 19:04:27.478185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.455 qpair failed and we were unable to recover it. 00:27:05.455 [2024-11-20 19:04:27.478310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.455 [2024-11-20 19:04:27.478345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.455 qpair failed and we were unable to recover it. 00:27:05.455 [2024-11-20 19:04:27.478533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.455 [2024-11-20 19:04:27.478566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.455 qpair failed and we were unable to recover it. 
00:27:05.455 [2024-11-20 19:04:27.478681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.455 [2024-11-20 19:04:27.478715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.455 qpair failed and we were unable to recover it. 00:27:05.455 [2024-11-20 19:04:27.478915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.455 [2024-11-20 19:04:27.478948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.455 qpair failed and we were unable to recover it. 00:27:05.455 [2024-11-20 19:04:27.479168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.455 [2024-11-20 19:04:27.479228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.455 qpair failed and we were unable to recover it. 00:27:05.455 [2024-11-20 19:04:27.479426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.455 [2024-11-20 19:04:27.479460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.455 qpair failed and we were unable to recover it. 00:27:05.455 [2024-11-20 19:04:27.479658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.455 [2024-11-20 19:04:27.479691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.455 qpair failed and we were unable to recover it. 
00:27:05.455 [2024-11-20 19:04:27.479955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.455 [2024-11-20 19:04:27.479994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.455 qpair failed and we were unable to recover it. 00:27:05.455 [2024-11-20 19:04:27.480126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.455 [2024-11-20 19:04:27.480158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.455 qpair failed and we were unable to recover it. 00:27:05.455 [2024-11-20 19:04:27.480306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.455 [2024-11-20 19:04:27.480345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.455 qpair failed and we were unable to recover it. 00:27:05.455 [2024-11-20 19:04:27.480475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.455 [2024-11-20 19:04:27.480509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.455 qpair failed and we were unable to recover it. 00:27:05.455 [2024-11-20 19:04:27.480784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.455 [2024-11-20 19:04:27.480817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.455 qpair failed and we were unable to recover it. 
00:27:05.455 [2024-11-20 19:04:27.480992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.455 [2024-11-20 19:04:27.481025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.455 qpair failed and we were unable to recover it. 00:27:05.455 [2024-11-20 19:04:27.481306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.455 [2024-11-20 19:04:27.481342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.455 qpair failed and we were unable to recover it. 00:27:05.455 [2024-11-20 19:04:27.481596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.455 [2024-11-20 19:04:27.481629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.455 qpair failed and we were unable to recover it. 00:27:05.455 [2024-11-20 19:04:27.481835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.455 [2024-11-20 19:04:27.481867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.455 qpair failed and we were unable to recover it. 00:27:05.455 [2024-11-20 19:04:27.481970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.455 [2024-11-20 19:04:27.482003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.455 qpair failed and we were unable to recover it. 
00:27:05.455 [2024-11-20 19:04:27.482137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.455 [2024-11-20 19:04:27.482171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.455 qpair failed and we were unable to recover it. 00:27:05.455 [2024-11-20 19:04:27.482299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.455 [2024-11-20 19:04:27.482332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.455 qpair failed and we were unable to recover it. 00:27:05.455 [2024-11-20 19:04:27.482435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.455 [2024-11-20 19:04:27.482468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.455 qpair failed and we were unable to recover it. 00:27:05.455 [2024-11-20 19:04:27.482588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.456 [2024-11-20 19:04:27.482620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.456 qpair failed and we were unable to recover it. 00:27:05.456 [2024-11-20 19:04:27.482832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.456 [2024-11-20 19:04:27.482865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.456 qpair failed and we were unable to recover it. 
00:27:05.456 [2024-11-20 19:04:27.483042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.456 [2024-11-20 19:04:27.483074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.456 qpair failed and we were unable to recover it. 00:27:05.456 [2024-11-20 19:04:27.483315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.456 [2024-11-20 19:04:27.483350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.456 qpair failed and we were unable to recover it. 00:27:05.456 [2024-11-20 19:04:27.483525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.456 [2024-11-20 19:04:27.483557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.456 qpair failed and we were unable to recover it. 00:27:05.456 [2024-11-20 19:04:27.483732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.456 [2024-11-20 19:04:27.483769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.456 qpair failed and we were unable to recover it. 00:27:05.456 [2024-11-20 19:04:27.483952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.456 [2024-11-20 19:04:27.483984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.456 qpair failed and we were unable to recover it. 
00:27:05.456 [2024-11-20 19:04:27.484234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.456 [2024-11-20 19:04:27.484267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.456 qpair failed and we were unable to recover it. 00:27:05.456 [2024-11-20 19:04:27.484460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.456 [2024-11-20 19:04:27.484493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.456 qpair failed and we were unable to recover it. 00:27:05.456 [2024-11-20 19:04:27.484627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.456 [2024-11-20 19:04:27.484660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.456 qpair failed and we were unable to recover it. 00:27:05.456 [2024-11-20 19:04:27.484850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.456 [2024-11-20 19:04:27.484882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.456 qpair failed and we were unable to recover it. 00:27:05.456 [2024-11-20 19:04:27.485017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.456 [2024-11-20 19:04:27.485050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.456 qpair failed and we were unable to recover it. 
00:27:05.456 [2024-11-20 19:04:27.485252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.456 [2024-11-20 19:04:27.485287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.456 qpair failed and we were unable to recover it. 00:27:05.456 [2024-11-20 19:04:27.485493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.456 [2024-11-20 19:04:27.485525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.456 qpair failed and we were unable to recover it. 00:27:05.456 [2024-11-20 19:04:27.485665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.456 [2024-11-20 19:04:27.485697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.456 qpair failed and we were unable to recover it. 00:27:05.456 [2024-11-20 19:04:27.485873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.456 [2024-11-20 19:04:27.485905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.456 qpair failed and we were unable to recover it. 00:27:05.456 [2024-11-20 19:04:27.486095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.456 [2024-11-20 19:04:27.486127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.456 qpair failed and we were unable to recover it. 
00:27:05.456 [2024-11-20 19:04:27.486305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.456 [2024-11-20 19:04:27.486340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.456 qpair failed and we were unable to recover it. 00:27:05.456 [2024-11-20 19:04:27.486463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.456 [2024-11-20 19:04:27.486496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.456 qpair failed and we were unable to recover it. 00:27:05.456 [2024-11-20 19:04:27.486629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.456 [2024-11-20 19:04:27.486662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.456 qpair failed and we were unable to recover it. 00:27:05.456 [2024-11-20 19:04:27.486830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.456 [2024-11-20 19:04:27.486862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.456 qpair failed and we were unable to recover it. 00:27:05.456 [2024-11-20 19:04:27.487053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.456 [2024-11-20 19:04:27.487087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.456 qpair failed and we were unable to recover it. 
00:27:05.456 [2024-11-20 19:04:27.487222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.456 [2024-11-20 19:04:27.487256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.456 qpair failed and we were unable to recover it. 00:27:05.456 [2024-11-20 19:04:27.487428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.456 [2024-11-20 19:04:27.487461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.456 qpair failed and we were unable to recover it. 00:27:05.456 [2024-11-20 19:04:27.487634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.456 [2024-11-20 19:04:27.487667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.456 qpair failed and we were unable to recover it. 00:27:05.456 [2024-11-20 19:04:27.487933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.456 [2024-11-20 19:04:27.487965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.456 qpair failed and we were unable to recover it. 00:27:05.456 [2024-11-20 19:04:27.488139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.456 [2024-11-20 19:04:27.488172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.456 qpair failed and we were unable to recover it. 
00:27:05.456 [2024-11-20 19:04:27.488299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.456 [2024-11-20 19:04:27.488338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.456 qpair failed and we were unable to recover it. 00:27:05.456 [2024-11-20 19:04:27.488539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.456 [2024-11-20 19:04:27.488571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.456 qpair failed and we were unable to recover it. 00:27:05.456 [2024-11-20 19:04:27.488781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.456 [2024-11-20 19:04:27.488813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.456 qpair failed and we were unable to recover it. 00:27:05.456 [2024-11-20 19:04:27.489015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.456 [2024-11-20 19:04:27.489053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.456 qpair failed and we were unable to recover it. 00:27:05.456 [2024-11-20 19:04:27.489296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.456 [2024-11-20 19:04:27.489330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.456 qpair failed and we were unable to recover it. 
00:27:05.456 [2024-11-20 19:04:27.489446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.456 [2024-11-20 19:04:27.489478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.456 qpair failed and we were unable to recover it. 00:27:05.456 [2024-11-20 19:04:27.489718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.456 [2024-11-20 19:04:27.489752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.456 qpair failed and we were unable to recover it. 00:27:05.456 [2024-11-20 19:04:27.489933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.456 [2024-11-20 19:04:27.489966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.456 qpair failed and we were unable to recover it. 00:27:05.456 [2024-11-20 19:04:27.490075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.456 [2024-11-20 19:04:27.490107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.457 qpair failed and we were unable to recover it. 00:27:05.457 [2024-11-20 19:04:27.490310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.457 [2024-11-20 19:04:27.490344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.457 qpair failed and we were unable to recover it. 
00:27:05.458 [2024-11-20 19:04:27.502817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.458 [2024-11-20 19:04:27.502850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.458 qpair failed and we were unable to recover it. 00:27:05.458 [2024-11-20 19:04:27.502975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.458 [2024-11-20 19:04:27.503008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.458 qpair failed and we were unable to recover it. 00:27:05.458 [2024-11-20 19:04:27.503236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.458 [2024-11-20 19:04:27.503309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.458 qpair failed and we were unable to recover it. 00:27:05.458 [2024-11-20 19:04:27.503507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.458 [2024-11-20 19:04:27.503543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.458 qpair failed and we were unable to recover it. 00:27:05.458 [2024-11-20 19:04:27.503733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.458 [2024-11-20 19:04:27.503766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.458 qpair failed and we were unable to recover it. 
00:27:05.460 [2024-11-20 19:04:27.515344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.460 [2024-11-20 19:04:27.515382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.460 qpair failed and we were unable to recover it. 00:27:05.460 [2024-11-20 19:04:27.515559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.460 [2024-11-20 19:04:27.515593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.460 qpair failed and we were unable to recover it. 00:27:05.460 [2024-11-20 19:04:27.515711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.460 [2024-11-20 19:04:27.515744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.460 qpair failed and we were unable to recover it. 00:27:05.460 [2024-11-20 19:04:27.515936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.460 [2024-11-20 19:04:27.515969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.460 qpair failed and we were unable to recover it. 00:27:05.460 [2024-11-20 19:04:27.516082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.460 [2024-11-20 19:04:27.516115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.460 qpair failed and we were unable to recover it. 
00:27:05.460 [2024-11-20 19:04:27.516242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.460 [2024-11-20 19:04:27.516276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.460 qpair failed and we were unable to recover it. 00:27:05.460 [2024-11-20 19:04:27.516460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.460 [2024-11-20 19:04:27.516491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.460 qpair failed and we were unable to recover it. 00:27:05.460 [2024-11-20 19:04:27.516709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.460 [2024-11-20 19:04:27.516743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.460 qpair failed and we were unable to recover it. 00:27:05.460 [2024-11-20 19:04:27.516983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.460 [2024-11-20 19:04:27.517017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.460 qpair failed and we were unable to recover it. 00:27:05.460 [2024-11-20 19:04:27.517194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.460 [2024-11-20 19:04:27.517250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.460 qpair failed and we were unable to recover it. 
00:27:05.460 [2024-11-20 19:04:27.517435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.460 [2024-11-20 19:04:27.517468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.460 qpair failed and we were unable to recover it. 00:27:05.460 [2024-11-20 19:04:27.517709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.460 [2024-11-20 19:04:27.517741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.460 qpair failed and we were unable to recover it. 00:27:05.460 [2024-11-20 19:04:27.517933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.460 [2024-11-20 19:04:27.517967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.460 qpair failed and we were unable to recover it. 00:27:05.460 [2024-11-20 19:04:27.518195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.460 [2024-11-20 19:04:27.518236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.460 qpair failed and we were unable to recover it. 00:27:05.460 [2024-11-20 19:04:27.518386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.460 [2024-11-20 19:04:27.518421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.460 qpair failed and we were unable to recover it. 
00:27:05.460 [2024-11-20 19:04:27.518606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.460 [2024-11-20 19:04:27.518639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.460 qpair failed and we were unable to recover it. 00:27:05.460 [2024-11-20 19:04:27.518759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.460 [2024-11-20 19:04:27.518793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.460 qpair failed and we were unable to recover it. 00:27:05.460 [2024-11-20 19:04:27.518908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.460 [2024-11-20 19:04:27.518941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.460 qpair failed and we were unable to recover it. 00:27:05.460 [2024-11-20 19:04:27.519059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.460 [2024-11-20 19:04:27.519092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.460 qpair failed and we were unable to recover it. 00:27:05.460 [2024-11-20 19:04:27.519218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.460 [2024-11-20 19:04:27.519252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.460 qpair failed and we were unable to recover it. 
00:27:05.460 [2024-11-20 19:04:27.519380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.460 [2024-11-20 19:04:27.519413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.460 qpair failed and we were unable to recover it. 00:27:05.460 [2024-11-20 19:04:27.519588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.460 [2024-11-20 19:04:27.519622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.460 qpair failed and we were unable to recover it. 00:27:05.460 [2024-11-20 19:04:27.519827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.460 [2024-11-20 19:04:27.519861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.460 qpair failed and we were unable to recover it. 00:27:05.460 [2024-11-20 19:04:27.520055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.461 [2024-11-20 19:04:27.520088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.461 qpair failed and we were unable to recover it. 00:27:05.461 [2024-11-20 19:04:27.520270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.461 [2024-11-20 19:04:27.520305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.461 qpair failed and we were unable to recover it. 
00:27:05.461 [2024-11-20 19:04:27.520515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.461 [2024-11-20 19:04:27.520547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.461 qpair failed and we were unable to recover it. 00:27:05.461 [2024-11-20 19:04:27.520816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.461 [2024-11-20 19:04:27.520849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.461 qpair failed and we were unable to recover it. 00:27:05.461 [2024-11-20 19:04:27.521030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.461 [2024-11-20 19:04:27.521063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.461 qpair failed and we were unable to recover it. 00:27:05.461 [2024-11-20 19:04:27.521238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.461 [2024-11-20 19:04:27.521273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.461 qpair failed and we were unable to recover it. 00:27:05.461 [2024-11-20 19:04:27.521402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.461 [2024-11-20 19:04:27.521434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.461 qpair failed and we were unable to recover it. 
00:27:05.461 [2024-11-20 19:04:27.521537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.461 [2024-11-20 19:04:27.521570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.461 qpair failed and we were unable to recover it. 00:27:05.461 [2024-11-20 19:04:27.521764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.461 [2024-11-20 19:04:27.521797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.461 qpair failed and we were unable to recover it. 00:27:05.461 [2024-11-20 19:04:27.521971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.461 [2024-11-20 19:04:27.522004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.461 qpair failed and we were unable to recover it. 00:27:05.461 [2024-11-20 19:04:27.522194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.461 [2024-11-20 19:04:27.522249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.461 qpair failed and we were unable to recover it. 00:27:05.461 [2024-11-20 19:04:27.522361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.461 [2024-11-20 19:04:27.522405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.461 qpair failed and we were unable to recover it. 
00:27:05.461 [2024-11-20 19:04:27.522545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.461 [2024-11-20 19:04:27.522577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.461 qpair failed and we were unable to recover it. 00:27:05.461 [2024-11-20 19:04:27.522763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.461 [2024-11-20 19:04:27.522795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.461 qpair failed and we were unable to recover it. 00:27:05.461 [2024-11-20 19:04:27.522982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.461 [2024-11-20 19:04:27.523014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.461 qpair failed and we were unable to recover it. 00:27:05.461 [2024-11-20 19:04:27.523229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.461 [2024-11-20 19:04:27.523264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.461 qpair failed and we were unable to recover it. 00:27:05.461 [2024-11-20 19:04:27.523454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.461 [2024-11-20 19:04:27.523485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.461 qpair failed and we were unable to recover it. 
00:27:05.461 [2024-11-20 19:04:27.523691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.461 [2024-11-20 19:04:27.523730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.461 qpair failed and we were unable to recover it. 00:27:05.461 [2024-11-20 19:04:27.523982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.461 [2024-11-20 19:04:27.524014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.461 qpair failed and we were unable to recover it. 00:27:05.461 [2024-11-20 19:04:27.524124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.461 [2024-11-20 19:04:27.524156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.461 qpair failed and we were unable to recover it. 00:27:05.461 [2024-11-20 19:04:27.524421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.461 [2024-11-20 19:04:27.524456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.461 qpair failed and we were unable to recover it. 00:27:05.461 [2024-11-20 19:04:27.524591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.461 [2024-11-20 19:04:27.524625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.461 qpair failed and we were unable to recover it. 
00:27:05.461 [2024-11-20 19:04:27.524822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.461 [2024-11-20 19:04:27.524854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.461 qpair failed and we were unable to recover it. 00:27:05.461 [2024-11-20 19:04:27.525120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.461 [2024-11-20 19:04:27.525153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.461 qpair failed and we were unable to recover it. 00:27:05.461 [2024-11-20 19:04:27.525289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.461 [2024-11-20 19:04:27.525345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.461 qpair failed and we were unable to recover it. 00:27:05.461 [2024-11-20 19:04:27.525556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.461 [2024-11-20 19:04:27.525589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.461 qpair failed and we were unable to recover it. 00:27:05.461 [2024-11-20 19:04:27.525781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.461 [2024-11-20 19:04:27.525814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.461 qpair failed and we were unable to recover it. 
00:27:05.461 [2024-11-20 19:04:27.525999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.461 [2024-11-20 19:04:27.526032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.461 qpair failed and we were unable to recover it. 00:27:05.461 [2024-11-20 19:04:27.526174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.461 [2024-11-20 19:04:27.526218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.461 qpair failed and we were unable to recover it. 00:27:05.461 [2024-11-20 19:04:27.526397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.461 [2024-11-20 19:04:27.526430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.461 qpair failed and we were unable to recover it. 00:27:05.461 [2024-11-20 19:04:27.526533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.461 [2024-11-20 19:04:27.526567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.461 qpair failed and we were unable to recover it. 00:27:05.461 [2024-11-20 19:04:27.526811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.461 [2024-11-20 19:04:27.526843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.461 qpair failed and we were unable to recover it. 
00:27:05.461 [2024-11-20 19:04:27.527044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.461 [2024-11-20 19:04:27.527077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.461 qpair failed and we were unable to recover it. 00:27:05.461 [2024-11-20 19:04:27.527250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.461 [2024-11-20 19:04:27.527285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.461 qpair failed and we were unable to recover it. 00:27:05.461 [2024-11-20 19:04:27.527526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.461 [2024-11-20 19:04:27.527559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.461 qpair failed and we were unable to recover it. 00:27:05.461 [2024-11-20 19:04:27.527697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.461 [2024-11-20 19:04:27.527731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.461 qpair failed and we were unable to recover it. 00:27:05.462 [2024-11-20 19:04:27.527904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.462 [2024-11-20 19:04:27.527937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.462 qpair failed and we were unable to recover it. 
00:27:05.462 [2024-11-20 19:04:27.528125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.462 [2024-11-20 19:04:27.528158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.462 qpair failed and we were unable to recover it. 00:27:05.462 [2024-11-20 19:04:27.528357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.462 [2024-11-20 19:04:27.528390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.462 qpair failed and we were unable to recover it. 00:27:05.462 [2024-11-20 19:04:27.528523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.462 [2024-11-20 19:04:27.528556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.462 qpair failed and we were unable to recover it. 00:27:05.462 [2024-11-20 19:04:27.528681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.462 [2024-11-20 19:04:27.528714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.462 qpair failed and we were unable to recover it. 00:27:05.462 [2024-11-20 19:04:27.528892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.462 [2024-11-20 19:04:27.528925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.462 qpair failed and we were unable to recover it. 
00:27:05.462 [2024-11-20 19:04:27.529046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.462 [2024-11-20 19:04:27.529078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.462 qpair failed and we were unable to recover it. 00:27:05.462 [2024-11-20 19:04:27.529197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.462 [2024-11-20 19:04:27.529239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.462 qpair failed and we were unable to recover it. 00:27:05.462 [2024-11-20 19:04:27.529485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.462 [2024-11-20 19:04:27.529519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.462 qpair failed and we were unable to recover it. 00:27:05.462 [2024-11-20 19:04:27.529639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.462 [2024-11-20 19:04:27.529673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.462 qpair failed and we were unable to recover it. 00:27:05.462 [2024-11-20 19:04:27.529881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.462 [2024-11-20 19:04:27.529915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.462 qpair failed and we were unable to recover it. 
00:27:05.462 [2024-11-20 19:04:27.530174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.462 [2024-11-20 19:04:27.530232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.462 qpair failed and we were unable to recover it. 00:27:05.462 [2024-11-20 19:04:27.530376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.462 [2024-11-20 19:04:27.530409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.462 qpair failed and we were unable to recover it. 00:27:05.462 [2024-11-20 19:04:27.530653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.462 [2024-11-20 19:04:27.530687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.462 qpair failed and we were unable to recover it. 00:27:05.462 [2024-11-20 19:04:27.530864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.462 [2024-11-20 19:04:27.530896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.462 qpair failed and we were unable to recover it. 00:27:05.462 [2024-11-20 19:04:27.531154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.462 [2024-11-20 19:04:27.531187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.462 qpair failed and we were unable to recover it. 
00:27:05.462 [2024-11-20 19:04:27.531329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.462 [2024-11-20 19:04:27.531362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.462 qpair failed and we were unable to recover it. 
00:27:05.465 [... the three-line error above (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats identically, with only timestamps changing, from 19:04:27.531 through 19:04:27.556 ...]
00:27:05.465 [2024-11-20 19:04:27.556919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.465 [2024-11-20 19:04:27.556952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.465 qpair failed and we were unable to recover it. 00:27:05.465 [2024-11-20 19:04:27.557166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.465 [2024-11-20 19:04:27.557199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.465 qpair failed and we were unable to recover it. 00:27:05.465 [2024-11-20 19:04:27.557335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.465 [2024-11-20 19:04:27.557369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.465 qpair failed and we were unable to recover it. 00:27:05.465 [2024-11-20 19:04:27.557489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.466 [2024-11-20 19:04:27.557522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.466 qpair failed and we were unable to recover it. 00:27:05.466 [2024-11-20 19:04:27.557651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.466 [2024-11-20 19:04:27.557684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.466 qpair failed and we were unable to recover it. 
00:27:05.466 [2024-11-20 19:04:27.557948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.466 [2024-11-20 19:04:27.557981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.466 qpair failed and we were unable to recover it. 00:27:05.466 [2024-11-20 19:04:27.558224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.466 [2024-11-20 19:04:27.558259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.466 qpair failed and we were unable to recover it. 00:27:05.466 [2024-11-20 19:04:27.558443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.466 [2024-11-20 19:04:27.558476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.466 qpair failed and we were unable to recover it. 00:27:05.466 [2024-11-20 19:04:27.558693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.466 [2024-11-20 19:04:27.558727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.466 qpair failed and we were unable to recover it. 00:27:05.466 [2024-11-20 19:04:27.558896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.466 [2024-11-20 19:04:27.558929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.466 qpair failed and we were unable to recover it. 
00:27:05.466 [2024-11-20 19:04:27.559033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.466 [2024-11-20 19:04:27.559067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.466 qpair failed and we were unable to recover it. 00:27:05.466 [2024-11-20 19:04:27.559197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.466 [2024-11-20 19:04:27.559267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.466 qpair failed and we were unable to recover it. 00:27:05.466 [2024-11-20 19:04:27.559530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.466 [2024-11-20 19:04:27.559563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.466 qpair failed and we were unable to recover it. 00:27:05.466 [2024-11-20 19:04:27.559736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.466 [2024-11-20 19:04:27.559769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.466 qpair failed and we were unable to recover it. 00:27:05.466 [2024-11-20 19:04:27.559894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.466 [2024-11-20 19:04:27.559928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.466 qpair failed and we were unable to recover it. 
00:27:05.466 [2024-11-20 19:04:27.560115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.466 [2024-11-20 19:04:27.560149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.466 qpair failed and we were unable to recover it. 00:27:05.466 [2024-11-20 19:04:27.560269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.466 [2024-11-20 19:04:27.560303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.466 qpair failed and we were unable to recover it. 00:27:05.466 [2024-11-20 19:04:27.560483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.466 [2024-11-20 19:04:27.560517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.466 qpair failed and we were unable to recover it. 00:27:05.466 [2024-11-20 19:04:27.560700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.466 [2024-11-20 19:04:27.560734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.466 qpair failed and we were unable to recover it. 00:27:05.466 [2024-11-20 19:04:27.560926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.466 [2024-11-20 19:04:27.560959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.466 qpair failed and we were unable to recover it. 
00:27:05.466 [2024-11-20 19:04:27.561157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.466 [2024-11-20 19:04:27.561191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.466 qpair failed and we were unable to recover it. 00:27:05.466 [2024-11-20 19:04:27.561399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.466 [2024-11-20 19:04:27.561432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.466 qpair failed and we were unable to recover it. 00:27:05.466 [2024-11-20 19:04:27.561540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.466 [2024-11-20 19:04:27.561574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.466 qpair failed and we were unable to recover it. 00:27:05.466 [2024-11-20 19:04:27.561767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.466 [2024-11-20 19:04:27.561806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.466 qpair failed and we were unable to recover it. 00:27:05.466 [2024-11-20 19:04:27.562056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.466 [2024-11-20 19:04:27.562089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.466 qpair failed and we were unable to recover it. 
00:27:05.466 [2024-11-20 19:04:27.562223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.466 [2024-11-20 19:04:27.562258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.466 qpair failed and we were unable to recover it. 00:27:05.466 [2024-11-20 19:04:27.562501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.466 [2024-11-20 19:04:27.562535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.466 qpair failed and we were unable to recover it. 00:27:05.466 [2024-11-20 19:04:27.562709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.466 [2024-11-20 19:04:27.562742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.466 qpair failed and we were unable to recover it. 00:27:05.466 [2024-11-20 19:04:27.563003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.466 [2024-11-20 19:04:27.563036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.466 qpair failed and we were unable to recover it. 00:27:05.466 [2024-11-20 19:04:27.563144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.466 [2024-11-20 19:04:27.563177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.466 qpair failed and we were unable to recover it. 
00:27:05.466 [2024-11-20 19:04:27.563390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.466 [2024-11-20 19:04:27.563424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.466 qpair failed and we were unable to recover it. 00:27:05.466 [2024-11-20 19:04:27.563609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.466 [2024-11-20 19:04:27.563641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.466 qpair failed and we were unable to recover it. 00:27:05.466 [2024-11-20 19:04:27.563886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.466 [2024-11-20 19:04:27.563919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.466 qpair failed and we were unable to recover it. 00:27:05.466 [2024-11-20 19:04:27.564163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.466 [2024-11-20 19:04:27.564196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.466 qpair failed and we were unable to recover it. 00:27:05.466 [2024-11-20 19:04:27.564381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.466 [2024-11-20 19:04:27.564414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.466 qpair failed and we were unable to recover it. 
00:27:05.466 [2024-11-20 19:04:27.564522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.466 [2024-11-20 19:04:27.564554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.466 qpair failed and we were unable to recover it. 00:27:05.466 [2024-11-20 19:04:27.564764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.466 [2024-11-20 19:04:27.564797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.466 qpair failed and we were unable to recover it. 00:27:05.466 [2024-11-20 19:04:27.565010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.466 [2024-11-20 19:04:27.565049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.466 qpair failed and we were unable to recover it. 00:27:05.466 [2024-11-20 19:04:27.565234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.467 [2024-11-20 19:04:27.565269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.467 qpair failed and we were unable to recover it. 00:27:05.467 [2024-11-20 19:04:27.565409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.467 [2024-11-20 19:04:27.565443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.467 qpair failed and we were unable to recover it. 
00:27:05.467 [2024-11-20 19:04:27.565624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.467 [2024-11-20 19:04:27.565656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.467 qpair failed and we were unable to recover it. 00:27:05.467 [2024-11-20 19:04:27.565851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.467 [2024-11-20 19:04:27.565885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.467 qpair failed and we were unable to recover it. 00:27:05.467 [2024-11-20 19:04:27.566098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.467 [2024-11-20 19:04:27.566132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.467 qpair failed and we were unable to recover it. 00:27:05.467 [2024-11-20 19:04:27.566246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.467 [2024-11-20 19:04:27.566280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.467 qpair failed and we were unable to recover it. 00:27:05.467 [2024-11-20 19:04:27.566396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.467 [2024-11-20 19:04:27.566441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.467 qpair failed and we were unable to recover it. 
00:27:05.467 [2024-11-20 19:04:27.566645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.467 [2024-11-20 19:04:27.566678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.467 qpair failed and we were unable to recover it. 00:27:05.467 [2024-11-20 19:04:27.566797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.467 [2024-11-20 19:04:27.566829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.467 qpair failed and we were unable to recover it. 00:27:05.467 [2024-11-20 19:04:27.567095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.467 [2024-11-20 19:04:27.567129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.467 qpair failed and we were unable to recover it. 00:27:05.467 [2024-11-20 19:04:27.567375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.467 [2024-11-20 19:04:27.567415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.467 qpair failed and we were unable to recover it. 00:27:05.467 [2024-11-20 19:04:27.567604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.467 [2024-11-20 19:04:27.567637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.467 qpair failed and we were unable to recover it. 
00:27:05.467 [2024-11-20 19:04:27.567829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.467 [2024-11-20 19:04:27.567865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.467 qpair failed and we were unable to recover it. 00:27:05.467 [2024-11-20 19:04:27.568148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.467 [2024-11-20 19:04:27.568181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.467 qpair failed and we were unable to recover it. 00:27:05.467 [2024-11-20 19:04:27.568378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.467 [2024-11-20 19:04:27.568412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.467 qpair failed and we were unable to recover it. 00:27:05.467 [2024-11-20 19:04:27.568596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.467 [2024-11-20 19:04:27.568628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.467 qpair failed and we were unable to recover it. 00:27:05.467 [2024-11-20 19:04:27.568888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.467 [2024-11-20 19:04:27.568921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.467 qpair failed and we were unable to recover it. 
00:27:05.467 [2024-11-20 19:04:27.569113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.467 [2024-11-20 19:04:27.569146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.467 qpair failed and we were unable to recover it. 00:27:05.467 [2024-11-20 19:04:27.569348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.467 [2024-11-20 19:04:27.569382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.467 qpair failed and we were unable to recover it. 00:27:05.467 [2024-11-20 19:04:27.569495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.467 [2024-11-20 19:04:27.569528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.467 qpair failed and we were unable to recover it. 00:27:05.467 [2024-11-20 19:04:27.569781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.467 [2024-11-20 19:04:27.569814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.467 qpair failed and we were unable to recover it. 00:27:05.467 [2024-11-20 19:04:27.570013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.467 [2024-11-20 19:04:27.570045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.467 qpair failed and we were unable to recover it. 
00:27:05.467 [2024-11-20 19:04:27.570289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.467 [2024-11-20 19:04:27.570323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.467 qpair failed and we were unable to recover it. 00:27:05.467 [2024-11-20 19:04:27.570593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.467 [2024-11-20 19:04:27.570626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.467 qpair failed and we were unable to recover it. 00:27:05.467 [2024-11-20 19:04:27.570802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.467 [2024-11-20 19:04:27.570835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.467 qpair failed and we were unable to recover it. 00:27:05.467 [2024-11-20 19:04:27.571028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.467 [2024-11-20 19:04:27.571060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.467 qpair failed and we were unable to recover it. 00:27:05.467 [2024-11-20 19:04:27.571307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.467 [2024-11-20 19:04:27.571342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.467 qpair failed and we were unable to recover it. 
00:27:05.467 [2024-11-20 19:04:27.571613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.467 [2024-11-20 19:04:27.571652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.467 qpair failed and we were unable to recover it. 00:27:05.467 [2024-11-20 19:04:27.571773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.467 [2024-11-20 19:04:27.571806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.467 qpair failed and we were unable to recover it. 00:27:05.467 [2024-11-20 19:04:27.571982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.467 [2024-11-20 19:04:27.572014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.467 qpair failed and we were unable to recover it. 00:27:05.467 [2024-11-20 19:04:27.572147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.467 [2024-11-20 19:04:27.572179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.467 qpair failed and we were unable to recover it. 00:27:05.467 [2024-11-20 19:04:27.572370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.467 [2024-11-20 19:04:27.572405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.467 qpair failed and we were unable to recover it. 
00:27:05.467 [2024-11-20 19:04:27.572594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.467 [2024-11-20 19:04:27.572627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.467 qpair failed and we were unable to recover it. 00:27:05.467 [2024-11-20 19:04:27.572809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.467 [2024-11-20 19:04:27.572841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.468 qpair failed and we were unable to recover it. 00:27:05.468 [2024-11-20 19:04:27.573035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.468 [2024-11-20 19:04:27.573068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.468 qpair failed and we were unable to recover it. 00:27:05.468 [2024-11-20 19:04:27.573189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.468 [2024-11-20 19:04:27.573228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.468 qpair failed and we were unable to recover it. 00:27:05.468 [2024-11-20 19:04:27.573400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.468 [2024-11-20 19:04:27.573433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.468 qpair failed and we were unable to recover it. 
00:27:05.468 [2024-11-20 19:04:27.573632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.468 [2024-11-20 19:04:27.573664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.468 qpair failed and we were unable to recover it. 00:27:05.468 [2024-11-20 19:04:27.573784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.468 [2024-11-20 19:04:27.573817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.468 qpair failed and we were unable to recover it. 00:27:05.468 [2024-11-20 19:04:27.573924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.468 [2024-11-20 19:04:27.573977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.468 qpair failed and we were unable to recover it. 00:27:05.468 [2024-11-20 19:04:27.574225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.468 [2024-11-20 19:04:27.574260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.468 qpair failed and we were unable to recover it. 00:27:05.468 [2024-11-20 19:04:27.574387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.468 [2024-11-20 19:04:27.574421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.468 qpair failed and we were unable to recover it. 
00:27:05.468 [2024-11-20 19:04:27.574602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.468 [2024-11-20 19:04:27.574634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.468 qpair failed and we were unable to recover it. 00:27:05.468 [2024-11-20 19:04:27.574764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.468 [2024-11-20 19:04:27.574797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.468 qpair failed and we were unable to recover it. 00:27:05.468 [2024-11-20 19:04:27.575067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.468 [2024-11-20 19:04:27.575101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.468 qpair failed and we were unable to recover it. 00:27:05.468 [2024-11-20 19:04:27.575232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.468 [2024-11-20 19:04:27.575266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.468 qpair failed and we were unable to recover it. 00:27:05.468 [2024-11-20 19:04:27.575392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.468 [2024-11-20 19:04:27.575424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.468 qpair failed and we were unable to recover it. 
00:27:05.468 [2024-11-20 19:04:27.575531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.468 [2024-11-20 19:04:27.575563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.468 qpair failed and we were unable to recover it. 00:27:05.468 [2024-11-20 19:04:27.575669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.468 [2024-11-20 19:04:27.575701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.468 qpair failed and we were unable to recover it. 00:27:05.468 [2024-11-20 19:04:27.575836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.468 [2024-11-20 19:04:27.575869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.468 qpair failed and we were unable to recover it. 00:27:05.468 [2024-11-20 19:04:27.576041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.468 [2024-11-20 19:04:27.576074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.468 qpair failed and we were unable to recover it. 00:27:05.468 [2024-11-20 19:04:27.576284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.468 [2024-11-20 19:04:27.576318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.468 qpair failed and we were unable to recover it. 
00:27:05.468 [2024-11-20 19:04:27.576534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.468 [2024-11-20 19:04:27.576566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.468 qpair failed and we were unable to recover it. 00:27:05.468 [2024-11-20 19:04:27.576761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.468 [2024-11-20 19:04:27.576794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.468 qpair failed and we were unable to recover it. 00:27:05.468 [2024-11-20 19:04:27.576931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.468 [2024-11-20 19:04:27.576964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.468 qpair failed and we were unable to recover it. 00:27:05.468 [2024-11-20 19:04:27.577200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.468 [2024-11-20 19:04:27.577240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.468 qpair failed and we were unable to recover it. 00:27:05.468 [2024-11-20 19:04:27.577443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.468 [2024-11-20 19:04:27.577475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.468 qpair failed and we were unable to recover it. 
00:27:05.468 [2024-11-20 19:04:27.577596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.468 [2024-11-20 19:04:27.577640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.468 qpair failed and we were unable to recover it. 00:27:05.468 [2024-11-20 19:04:27.577910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.468 [2024-11-20 19:04:27.577942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.468 qpair failed and we were unable to recover it. 00:27:05.468 [2024-11-20 19:04:27.578069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.468 [2024-11-20 19:04:27.578100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.468 qpair failed and we were unable to recover it. 00:27:05.468 [2024-11-20 19:04:27.578293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.468 [2024-11-20 19:04:27.578326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.468 qpair failed and we were unable to recover it. 00:27:05.468 [2024-11-20 19:04:27.578503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.468 [2024-11-20 19:04:27.578536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.468 qpair failed and we were unable to recover it. 
00:27:05.468 [2024-11-20 19:04:27.578718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.468 [2024-11-20 19:04:27.578751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.468 qpair failed and we were unable to recover it. 00:27:05.468 [2024-11-20 19:04:27.578926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.468 [2024-11-20 19:04:27.578958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.468 qpair failed and we were unable to recover it. 00:27:05.468 [2024-11-20 19:04:27.579147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.468 [2024-11-20 19:04:27.579181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.468 qpair failed and we were unable to recover it. 00:27:05.468 [2024-11-20 19:04:27.579409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.468 [2024-11-20 19:04:27.579443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.468 qpair failed and we were unable to recover it. 00:27:05.468 [2024-11-20 19:04:27.579575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.468 [2024-11-20 19:04:27.579608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.468 qpair failed and we were unable to recover it. 
00:27:05.468 [2024-11-20 19:04:27.579793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.469 [2024-11-20 19:04:27.579825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.469 qpair failed and we were unable to recover it. 00:27:05.469 [2024-11-20 19:04:27.580011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.469 [2024-11-20 19:04:27.580044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.469 qpair failed and we were unable to recover it. 00:27:05.469 [2024-11-20 19:04:27.580183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.469 [2024-11-20 19:04:27.580223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.469 qpair failed and we were unable to recover it. 00:27:05.469 [2024-11-20 19:04:27.580370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.469 [2024-11-20 19:04:27.580402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.469 qpair failed and we were unable to recover it. 00:27:05.469 [2024-11-20 19:04:27.580644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.469 [2024-11-20 19:04:27.580677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.469 qpair failed and we were unable to recover it. 
00:27:05.469 [2024-11-20 19:04:27.580849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.469 [2024-11-20 19:04:27.580881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.469 qpair failed and we were unable to recover it. 00:27:05.469 [2024-11-20 19:04:27.581056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.469 [2024-11-20 19:04:27.581089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.469 qpair failed and we were unable to recover it. 00:27:05.469 [2024-11-20 19:04:27.581252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.469 [2024-11-20 19:04:27.581287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.469 qpair failed and we were unable to recover it. 00:27:05.469 [2024-11-20 19:04:27.581470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.469 [2024-11-20 19:04:27.581503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.469 qpair failed and we were unable to recover it. 00:27:05.469 [2024-11-20 19:04:27.581681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.469 [2024-11-20 19:04:27.581714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.469 qpair failed and we were unable to recover it. 
00:27:05.469 [2024-11-20 19:04:27.581965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.469 [2024-11-20 19:04:27.581997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.469 qpair failed and we were unable to recover it. 00:27:05.469 [2024-11-20 19:04:27.582182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.469 [2024-11-20 19:04:27.582232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.469 qpair failed and we were unable to recover it. 00:27:05.469 [2024-11-20 19:04:27.582408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.469 [2024-11-20 19:04:27.582447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.469 qpair failed and we were unable to recover it. 00:27:05.469 [2024-11-20 19:04:27.582572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.469 [2024-11-20 19:04:27.582605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.469 qpair failed and we were unable to recover it. 00:27:05.469 [2024-11-20 19:04:27.582789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.469 [2024-11-20 19:04:27.582822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.469 qpair failed and we were unable to recover it. 
00:27:05.469 [2024-11-20 19:04:27.583065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.469 [2024-11-20 19:04:27.583097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.469 qpair failed and we were unable to recover it. 00:27:05.469 [2024-11-20 19:04:27.583270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.469 [2024-11-20 19:04:27.583305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.469 qpair failed and we were unable to recover it. 00:27:05.469 [2024-11-20 19:04:27.583484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.469 [2024-11-20 19:04:27.583517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.469 qpair failed and we were unable to recover it. 00:27:05.469 [2024-11-20 19:04:27.583785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.469 [2024-11-20 19:04:27.583818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.469 qpair failed and we were unable to recover it. 00:27:05.469 [2024-11-20 19:04:27.583925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.469 [2024-11-20 19:04:27.583958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.469 qpair failed and we were unable to recover it. 
00:27:05.469 [2024-11-20 19:04:27.584221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.469 [2024-11-20 19:04:27.584255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.469 qpair failed and we were unable to recover it. 00:27:05.469 [2024-11-20 19:04:27.584380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.469 [2024-11-20 19:04:27.584413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.469 qpair failed and we were unable to recover it. 00:27:05.469 [2024-11-20 19:04:27.584589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.469 [2024-11-20 19:04:27.584623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.469 qpair failed and we were unable to recover it. 00:27:05.469 [2024-11-20 19:04:27.584763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.469 [2024-11-20 19:04:27.584796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.469 qpair failed and we were unable to recover it. 00:27:05.469 [2024-11-20 19:04:27.584982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.469 [2024-11-20 19:04:27.585015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.469 qpair failed and we were unable to recover it. 
00:27:05.469 [2024-11-20 19:04:27.585196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.469 [2024-11-20 19:04:27.585236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.469 qpair failed and we were unable to recover it. 00:27:05.469 [2024-11-20 19:04:27.585456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.469 [2024-11-20 19:04:27.585490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.469 qpair failed and we were unable to recover it. 00:27:05.469 [2024-11-20 19:04:27.585671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.469 [2024-11-20 19:04:27.585704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.469 qpair failed and we were unable to recover it. 00:27:05.469 [2024-11-20 19:04:27.585861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.469 [2024-11-20 19:04:27.585896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.469 qpair failed and we were unable to recover it. 00:27:05.470 [2024-11-20 19:04:27.586135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.470 [2024-11-20 19:04:27.586168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.470 qpair failed and we were unable to recover it. 
00:27:05.470 [2024-11-20 19:04:27.586382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.470 [2024-11-20 19:04:27.586417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.470 qpair failed and we were unable to recover it. 00:27:05.470 [2024-11-20 19:04:27.586603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.470 [2024-11-20 19:04:27.586637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.470 qpair failed and we were unable to recover it. 00:27:05.470 [2024-11-20 19:04:27.586914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.470 [2024-11-20 19:04:27.586947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.470 qpair failed and we were unable to recover it. 00:27:05.470 [2024-11-20 19:04:27.587224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.470 [2024-11-20 19:04:27.587258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.470 qpair failed and we were unable to recover it. 00:27:05.470 [2024-11-20 19:04:27.587451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.470 [2024-11-20 19:04:27.587484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.470 qpair failed and we were unable to recover it. 
00:27:05.470 [2024-11-20 19:04:27.587677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.470 [2024-11-20 19:04:27.587709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.470 qpair failed and we were unable to recover it. 00:27:05.470 [2024-11-20 19:04:27.587827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.470 [2024-11-20 19:04:27.587860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.470 qpair failed and we were unable to recover it. 00:27:05.470 [2024-11-20 19:04:27.588041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.470 [2024-11-20 19:04:27.588075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.470 qpair failed and we were unable to recover it. 00:27:05.470 [2024-11-20 19:04:27.588271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.470 [2024-11-20 19:04:27.588305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.470 qpair failed and we were unable to recover it. 00:27:05.470 [2024-11-20 19:04:27.588483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.470 [2024-11-20 19:04:27.588516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.470 qpair failed and we were unable to recover it. 
00:27:05.470 [2024-11-20 19:04:27.588700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.470 [2024-11-20 19:04:27.588733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.470 qpair failed and we were unable to recover it. 00:27:05.470 [2024-11-20 19:04:27.588998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.470 [2024-11-20 19:04:27.589030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.470 qpair failed and we were unable to recover it. 00:27:05.470 [2024-11-20 19:04:27.589250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.470 [2024-11-20 19:04:27.589284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.470 qpair failed and we were unable to recover it. 00:27:05.470 [2024-11-20 19:04:27.589402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.470 [2024-11-20 19:04:27.589436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.470 qpair failed and we were unable to recover it. 00:27:05.470 [2024-11-20 19:04:27.589636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.470 [2024-11-20 19:04:27.589668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.470 qpair failed and we were unable to recover it. 
00:27:05.470 [2024-11-20 19:04:27.589861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.470 [2024-11-20 19:04:27.589894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.470 qpair failed and we were unable to recover it. 00:27:05.470 [2024-11-20 19:04:27.590012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.470 [2024-11-20 19:04:27.590045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.470 qpair failed and we were unable to recover it. 00:27:05.470 [2024-11-20 19:04:27.590236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.470 [2024-11-20 19:04:27.590269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.470 qpair failed and we were unable to recover it. 00:27:05.470 [2024-11-20 19:04:27.590396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.470 [2024-11-20 19:04:27.590429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.470 qpair failed and we were unable to recover it. 00:27:05.470 [2024-11-20 19:04:27.590615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.470 [2024-11-20 19:04:27.590647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.470 qpair failed and we were unable to recover it. 
00:27:05.470 [2024-11-20 19:04:27.590829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.470 [2024-11-20 19:04:27.590862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.470 qpair failed and we were unable to recover it. 00:27:05.470 [2024-11-20 19:04:27.590992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.470 [2024-11-20 19:04:27.591025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.470 qpair failed and we were unable to recover it. 00:27:05.470 [2024-11-20 19:04:27.591222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.470 [2024-11-20 19:04:27.591262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.470 qpair failed and we were unable to recover it. 00:27:05.470 [2024-11-20 19:04:27.591435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.470 [2024-11-20 19:04:27.591468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.470 qpair failed and we were unable to recover it. 00:27:05.470 [2024-11-20 19:04:27.591660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.470 [2024-11-20 19:04:27.591693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.470 qpair failed and we were unable to recover it. 
00:27:05.470 [2024-11-20 19:04:27.591822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.470 [2024-11-20 19:04:27.591855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.470 qpair failed and we were unable to recover it. 00:27:05.470 [2024-11-20 19:04:27.592043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.470 [2024-11-20 19:04:27.592075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.470 qpair failed and we were unable to recover it. 00:27:05.470 [2024-11-20 19:04:27.592222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.470 [2024-11-20 19:04:27.592256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.470 qpair failed and we were unable to recover it. 00:27:05.470 [2024-11-20 19:04:27.592364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.470 [2024-11-20 19:04:27.592396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.470 qpair failed and we were unable to recover it. 00:27:05.470 [2024-11-20 19:04:27.592577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.470 [2024-11-20 19:04:27.592609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.470 qpair failed and we were unable to recover it. 
00:27:05.470 [2024-11-20 19:04:27.592713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.470 [2024-11-20 19:04:27.592745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.470 qpair failed and we were unable to recover it. 00:27:05.470 [2024-11-20 19:04:27.592932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.470 [2024-11-20 19:04:27.592965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.470 qpair failed and we were unable to recover it. 00:27:05.470 [2024-11-20 19:04:27.593142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.471 [2024-11-20 19:04:27.593174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.471 qpair failed and we were unable to recover it. 00:27:05.471 [2024-11-20 19:04:27.593426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.471 [2024-11-20 19:04:27.593460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.471 qpair failed and we were unable to recover it. 00:27:05.471 [2024-11-20 19:04:27.593567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.471 [2024-11-20 19:04:27.593600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.471 qpair failed and we were unable to recover it. 
00:27:05.471 [2024-11-20 19:04:27.593777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.471 [2024-11-20 19:04:27.593816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.471 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) and "qpair failed and we were unable to recover it" message repeat for tqpair=0x7f7418000b90 from 19:04:27.593777 through 19:04:27.614696 ...]
00:27:05.473 [2024-11-20 19:04:27.615035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.473 [2024-11-20 19:04:27.615107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.473 qpair failed and we were unable to recover it.
[... the same failure then repeats for tqpair=0x7f741c000b90 from 19:04:27.615035 through 19:04:27.619692 ...]
00:27:05.474 [2024-11-20 19:04:27.619810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.474 [2024-11-20 19:04:27.619843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.474 qpair failed and we were unable to recover it. 00:27:05.474 [2024-11-20 19:04:27.619966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.474 [2024-11-20 19:04:27.619997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.474 qpair failed and we were unable to recover it. 00:27:05.474 [2024-11-20 19:04:27.620283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.474 [2024-11-20 19:04:27.620317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.474 qpair failed and we were unable to recover it. 00:27:05.474 [2024-11-20 19:04:27.620506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.474 [2024-11-20 19:04:27.620539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.474 qpair failed and we were unable to recover it. 00:27:05.474 [2024-11-20 19:04:27.620806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.474 [2024-11-20 19:04:27.620839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.474 qpair failed and we were unable to recover it. 
00:27:05.474 [2024-11-20 19:04:27.621065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.474 [2024-11-20 19:04:27.621099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.474 qpair failed and we were unable to recover it. 00:27:05.474 [2024-11-20 19:04:27.621256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.474 [2024-11-20 19:04:27.621290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.474 qpair failed and we were unable to recover it. 00:27:05.474 [2024-11-20 19:04:27.621425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.474 [2024-11-20 19:04:27.621457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.474 qpair failed and we were unable to recover it. 00:27:05.474 [2024-11-20 19:04:27.621640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.474 [2024-11-20 19:04:27.621673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.474 qpair failed and we were unable to recover it. 00:27:05.474 [2024-11-20 19:04:27.621919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.474 [2024-11-20 19:04:27.621953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.474 qpair failed and we were unable to recover it. 
00:27:05.474 [2024-11-20 19:04:27.622124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.474 [2024-11-20 19:04:27.622156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.474 qpair failed and we were unable to recover it. 00:27:05.474 [2024-11-20 19:04:27.622404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.474 [2024-11-20 19:04:27.622439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.474 qpair failed and we were unable to recover it. 00:27:05.474 [2024-11-20 19:04:27.622575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.474 [2024-11-20 19:04:27.622609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.474 qpair failed and we were unable to recover it. 00:27:05.474 [2024-11-20 19:04:27.622798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.474 [2024-11-20 19:04:27.622832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.474 qpair failed and we were unable to recover it. 00:27:05.474 [2024-11-20 19:04:27.623073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.474 [2024-11-20 19:04:27.623105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.474 qpair failed and we were unable to recover it. 
00:27:05.474 [2024-11-20 19:04:27.623295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.475 [2024-11-20 19:04:27.623330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.475 qpair failed and we were unable to recover it. 00:27:05.475 [2024-11-20 19:04:27.623452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.475 [2024-11-20 19:04:27.623492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.475 qpair failed and we were unable to recover it. 00:27:05.475 [2024-11-20 19:04:27.623616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.475 [2024-11-20 19:04:27.623650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.475 qpair failed and we were unable to recover it. 00:27:05.475 [2024-11-20 19:04:27.623759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.475 [2024-11-20 19:04:27.623792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.475 qpair failed and we were unable to recover it. 00:27:05.475 [2024-11-20 19:04:27.624013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.475 [2024-11-20 19:04:27.624050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.475 qpair failed and we were unable to recover it. 
00:27:05.475 [2024-11-20 19:04:27.624270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.475 [2024-11-20 19:04:27.624306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.475 qpair failed and we were unable to recover it. 00:27:05.475 [2024-11-20 19:04:27.624487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.475 [2024-11-20 19:04:27.624520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.475 qpair failed and we were unable to recover it. 00:27:05.475 [2024-11-20 19:04:27.624667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.475 [2024-11-20 19:04:27.624700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.475 qpair failed and we were unable to recover it. 00:27:05.475 [2024-11-20 19:04:27.624896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.475 [2024-11-20 19:04:27.624941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.475 qpair failed and we were unable to recover it. 00:27:05.475 [2024-11-20 19:04:27.625129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.475 [2024-11-20 19:04:27.625162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.475 qpair failed and we were unable to recover it. 
00:27:05.475 [2024-11-20 19:04:27.625279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.475 [2024-11-20 19:04:27.625312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.475 qpair failed and we were unable to recover it. 00:27:05.475 [2024-11-20 19:04:27.625505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.475 [2024-11-20 19:04:27.625539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.475 qpair failed and we were unable to recover it. 00:27:05.475 [2024-11-20 19:04:27.625729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.475 [2024-11-20 19:04:27.625762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.475 qpair failed and we were unable to recover it. 00:27:05.475 [2024-11-20 19:04:27.625954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.475 [2024-11-20 19:04:27.625987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.475 qpair failed and we were unable to recover it. 00:27:05.475 [2024-11-20 19:04:27.626175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.475 [2024-11-20 19:04:27.626231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.475 qpair failed and we were unable to recover it. 
00:27:05.475 [2024-11-20 19:04:27.626407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.475 [2024-11-20 19:04:27.626441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.475 qpair failed and we were unable to recover it. 00:27:05.475 [2024-11-20 19:04:27.626574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.475 [2024-11-20 19:04:27.626606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.475 qpair failed and we were unable to recover it. 00:27:05.475 [2024-11-20 19:04:27.626788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.475 [2024-11-20 19:04:27.626826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.475 qpair failed and we were unable to recover it. 00:27:05.475 [2024-11-20 19:04:27.627019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.475 [2024-11-20 19:04:27.627052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.475 qpair failed and we were unable to recover it. 00:27:05.475 [2024-11-20 19:04:27.627170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.475 [2024-11-20 19:04:27.627212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.475 qpair failed and we were unable to recover it. 
00:27:05.475 [2024-11-20 19:04:27.627404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.475 [2024-11-20 19:04:27.627436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.475 qpair failed and we were unable to recover it. 00:27:05.475 [2024-11-20 19:04:27.627609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.475 [2024-11-20 19:04:27.627642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.475 qpair failed and we were unable to recover it. 00:27:05.475 [2024-11-20 19:04:27.627750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.475 [2024-11-20 19:04:27.627782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.475 qpair failed and we were unable to recover it. 00:27:05.475 [2024-11-20 19:04:27.627900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.475 [2024-11-20 19:04:27.627935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.475 qpair failed and we were unable to recover it. 00:27:05.475 [2024-11-20 19:04:27.628040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.475 [2024-11-20 19:04:27.628073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.475 qpair failed and we were unable to recover it. 
00:27:05.475 [2024-11-20 19:04:27.628186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.475 [2024-11-20 19:04:27.628227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.475 qpair failed and we were unable to recover it. 00:27:05.475 [2024-11-20 19:04:27.628493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.475 [2024-11-20 19:04:27.628526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.475 qpair failed and we were unable to recover it. 00:27:05.475 [2024-11-20 19:04:27.628696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.475 [2024-11-20 19:04:27.628730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.475 qpair failed and we were unable to recover it. 00:27:05.475 [2024-11-20 19:04:27.628919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.476 [2024-11-20 19:04:27.628952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.476 qpair failed and we were unable to recover it. 00:27:05.476 [2024-11-20 19:04:27.629071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.476 [2024-11-20 19:04:27.629105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.476 qpair failed and we were unable to recover it. 
00:27:05.476 [2024-11-20 19:04:27.629342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.476 [2024-11-20 19:04:27.629377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.476 qpair failed and we were unable to recover it. 00:27:05.476 [2024-11-20 19:04:27.629565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.476 [2024-11-20 19:04:27.629605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.476 qpair failed and we were unable to recover it. 00:27:05.476 [2024-11-20 19:04:27.629748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.476 [2024-11-20 19:04:27.629781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.476 qpair failed and we were unable to recover it. 00:27:05.476 [2024-11-20 19:04:27.629899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.476 [2024-11-20 19:04:27.629932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.476 qpair failed and we were unable to recover it. 00:27:05.476 [2024-11-20 19:04:27.630175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.476 [2024-11-20 19:04:27.630217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.476 qpair failed and we were unable to recover it. 
00:27:05.476 [2024-11-20 19:04:27.630340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.476 [2024-11-20 19:04:27.630373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.476 qpair failed and we were unable to recover it. 00:27:05.476 [2024-11-20 19:04:27.630578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.476 [2024-11-20 19:04:27.630610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.476 qpair failed and we were unable to recover it. 00:27:05.476 [2024-11-20 19:04:27.630817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.476 [2024-11-20 19:04:27.630850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.476 qpair failed and we were unable to recover it. 00:27:05.476 [2024-11-20 19:04:27.630969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.476 [2024-11-20 19:04:27.631003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.476 qpair failed and we were unable to recover it. 00:27:05.476 [2024-11-20 19:04:27.631107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.476 [2024-11-20 19:04:27.631139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.476 qpair failed and we were unable to recover it. 
00:27:05.476 [2024-11-20 19:04:27.631333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.476 [2024-11-20 19:04:27.631367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.476 qpair failed and we were unable to recover it. 00:27:05.476 [2024-11-20 19:04:27.631556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.476 [2024-11-20 19:04:27.631588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.476 qpair failed and we were unable to recover it. 00:27:05.476 [2024-11-20 19:04:27.631707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.476 [2024-11-20 19:04:27.631740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.476 qpair failed and we were unable to recover it. 00:27:05.476 [2024-11-20 19:04:27.631950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.476 [2024-11-20 19:04:27.631983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.476 qpair failed and we were unable to recover it. 00:27:05.476 [2024-11-20 19:04:27.632220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.476 [2024-11-20 19:04:27.632254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.476 qpair failed and we were unable to recover it. 
00:27:05.476 [2024-11-20 19:04:27.632370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.476 [2024-11-20 19:04:27.632402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.476 qpair failed and we were unable to recover it. 00:27:05.476 [2024-11-20 19:04:27.632513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.476 [2024-11-20 19:04:27.632545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.476 qpair failed and we were unable to recover it. 00:27:05.476 [2024-11-20 19:04:27.632764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.476 [2024-11-20 19:04:27.632798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.476 qpair failed and we were unable to recover it. 00:27:05.476 [2024-11-20 19:04:27.632919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.476 [2024-11-20 19:04:27.632951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.476 qpair failed and we were unable to recover it. 00:27:05.476 [2024-11-20 19:04:27.633229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.476 [2024-11-20 19:04:27.633263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.476 qpair failed and we were unable to recover it. 
00:27:05.476 [2024-11-20 19:04:27.633450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.476 [2024-11-20 19:04:27.633484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.476 qpair failed and we were unable to recover it. 00:27:05.476 [2024-11-20 19:04:27.633666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.476 [2024-11-20 19:04:27.633698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.476 qpair failed and we were unable to recover it. 00:27:05.476 [2024-11-20 19:04:27.633833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.476 [2024-11-20 19:04:27.633872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.476 qpair failed and we were unable to recover it. 00:27:05.476 [2024-11-20 19:04:27.633994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.476 [2024-11-20 19:04:27.634027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.476 qpair failed and we were unable to recover it. 00:27:05.476 [2024-11-20 19:04:27.634218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.476 [2024-11-20 19:04:27.634252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.476 qpair failed and we were unable to recover it. 
00:27:05.476 [2024-11-20 19:04:27.634361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.476 [2024-11-20 19:04:27.634395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.476 qpair failed and we were unable to recover it. 00:27:05.476 [2024-11-20 19:04:27.634527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.476 [2024-11-20 19:04:27.634560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.476 qpair failed and we were unable to recover it. 00:27:05.476 [2024-11-20 19:04:27.634734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.476 [2024-11-20 19:04:27.634782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.476 qpair failed and we were unable to recover it. 00:27:05.476 [2024-11-20 19:04:27.634893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.476 [2024-11-20 19:04:27.634926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.476 qpair failed and we were unable to recover it. 00:27:05.476 [2024-11-20 19:04:27.635135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.476 [2024-11-20 19:04:27.635168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.476 qpair failed and we were unable to recover it. 
00:27:05.476 [2024-11-20 19:04:27.635299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.476 [2024-11-20 19:04:27.635335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.476 qpair failed and we were unable to recover it. 00:27:05.476 [2024-11-20 19:04:27.635460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.476 [2024-11-20 19:04:27.635492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.476 qpair failed and we were unable to recover it. 00:27:05.477 [2024-11-20 19:04:27.635758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.477 [2024-11-20 19:04:27.635791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.477 qpair failed and we were unable to recover it. 00:27:05.477 [2024-11-20 19:04:27.635914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.477 [2024-11-20 19:04:27.635947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.477 qpair failed and we were unable to recover it. 00:27:05.477 [2024-11-20 19:04:27.636147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.477 [2024-11-20 19:04:27.636180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.477 qpair failed and we were unable to recover it. 
00:27:05.477 [2024-11-20 19:04:27.636379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.477 [2024-11-20 19:04:27.636411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.477 qpair failed and we were unable to recover it. 00:27:05.477 [2024-11-20 19:04:27.636547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.477 [2024-11-20 19:04:27.636581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.477 qpair failed and we were unable to recover it. 00:27:05.477 [2024-11-20 19:04:27.636770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.477 [2024-11-20 19:04:27.636802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.477 qpair failed and we were unable to recover it. 00:27:05.477 [2024-11-20 19:04:27.636997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.477 [2024-11-20 19:04:27.637030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.477 qpair failed and we were unable to recover it. 00:27:05.477 [2024-11-20 19:04:27.637217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.477 [2024-11-20 19:04:27.637251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.477 qpair failed and we were unable to recover it. 
00:27:05.477 [2024-11-20 19:04:27.637432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.477 [2024-11-20 19:04:27.637465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.477 qpair failed and we were unable to recover it. 00:27:05.477 [2024-11-20 19:04:27.637661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.477 [2024-11-20 19:04:27.637694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.477 qpair failed and we were unable to recover it. 00:27:05.477 [2024-11-20 19:04:27.637935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.477 [2024-11-20 19:04:27.637969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.477 qpair failed and we were unable to recover it. 00:27:05.477 [2024-11-20 19:04:27.638089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.477 [2024-11-20 19:04:27.638123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.477 qpair failed and we were unable to recover it. 00:27:05.477 [2024-11-20 19:04:27.638299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.477 [2024-11-20 19:04:27.638334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.477 qpair failed and we were unable to recover it. 
00:27:05.477 [2024-11-20 19:04:27.638517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.477 [2024-11-20 19:04:27.638550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.477 qpair failed and we were unable to recover it. 00:27:05.477 [2024-11-20 19:04:27.638675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.477 [2024-11-20 19:04:27.638708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.477 qpair failed and we were unable to recover it. 00:27:05.477 [2024-11-20 19:04:27.638897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.477 [2024-11-20 19:04:27.638930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.477 qpair failed and we were unable to recover it. 00:27:05.477 [2024-11-20 19:04:27.639119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.477 [2024-11-20 19:04:27.639153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.477 qpair failed and we were unable to recover it. 00:27:05.477 [2024-11-20 19:04:27.639362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.477 [2024-11-20 19:04:27.639395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.477 qpair failed and we were unable to recover it. 
00:27:05.477 [2024-11-20 19:04:27.639661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.477 [2024-11-20 19:04:27.639694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.477 qpair failed and we were unable to recover it. 00:27:05.477 [2024-11-20 19:04:27.639945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.477 [2024-11-20 19:04:27.639977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.477 qpair failed and we were unable to recover it. 00:27:05.477 [2024-11-20 19:04:27.640171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.477 [2024-11-20 19:04:27.640211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.477 qpair failed and we were unable to recover it. 00:27:05.477 [2024-11-20 19:04:27.640396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.477 [2024-11-20 19:04:27.640428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.477 qpair failed and we were unable to recover it. 00:27:05.477 [2024-11-20 19:04:27.640595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.477 [2024-11-20 19:04:27.640667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.477 qpair failed and we were unable to recover it. 
00:27:05.477 [2024-11-20 19:04:27.640869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.477 [2024-11-20 19:04:27.640907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.477 qpair failed and we were unable to recover it. 00:27:05.477 [2024-11-20 19:04:27.641100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.477 [2024-11-20 19:04:27.641135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.477 qpair failed and we were unable to recover it. 00:27:05.477 [2024-11-20 19:04:27.641405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.477 [2024-11-20 19:04:27.641439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.477 qpair failed and we were unable to recover it. 00:27:05.477 [2024-11-20 19:04:27.641697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.477 [2024-11-20 19:04:27.641730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.477 qpair failed and we were unable to recover it. 00:27:05.477 [2024-11-20 19:04:27.641861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.477 [2024-11-20 19:04:27.641895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.477 qpair failed and we were unable to recover it. 
00:27:05.477 [2024-11-20 19:04:27.642163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.477 [2024-11-20 19:04:27.642198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.477 qpair failed and we were unable to recover it. 00:27:05.477 [2024-11-20 19:04:27.642361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.477 [2024-11-20 19:04:27.642396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.477 qpair failed and we were unable to recover it. 00:27:05.477 [2024-11-20 19:04:27.642531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.477 [2024-11-20 19:04:27.642564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.477 qpair failed and we were unable to recover it. 00:27:05.477 [2024-11-20 19:04:27.642693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.477 [2024-11-20 19:04:27.642726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.477 qpair failed and we were unable to recover it. 00:27:05.477 [2024-11-20 19:04:27.642839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.477 [2024-11-20 19:04:27.642871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.477 qpair failed and we were unable to recover it. 
00:27:05.478 [2024-11-20 19:04:27.643046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.478 [2024-11-20 19:04:27.643080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.478 qpair failed and we were unable to recover it. 00:27:05.478 [2024-11-20 19:04:27.643194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.478 [2024-11-20 19:04:27.643239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.478 qpair failed and we were unable to recover it. 00:27:05.478 [2024-11-20 19:04:27.643424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.478 [2024-11-20 19:04:27.643463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.478 qpair failed and we were unable to recover it. 00:27:05.478 [2024-11-20 19:04:27.643663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.478 [2024-11-20 19:04:27.643693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.478 qpair failed and we were unable to recover it. 00:27:05.478 [2024-11-20 19:04:27.643960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.478 [2024-11-20 19:04:27.643991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.478 qpair failed and we were unable to recover it. 
00:27:05.478 [2024-11-20 19:04:27.644118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.478 [2024-11-20 19:04:27.644148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.478 qpair failed and we were unable to recover it. 00:27:05.478 [2024-11-20 19:04:27.644341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.478 [2024-11-20 19:04:27.644373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.478 qpair failed and we were unable to recover it. 00:27:05.478 [2024-11-20 19:04:27.644502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.478 [2024-11-20 19:04:27.644532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.478 qpair failed and we were unable to recover it. 00:27:05.478 [2024-11-20 19:04:27.644724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.478 [2024-11-20 19:04:27.644754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.478 qpair failed and we were unable to recover it. 00:27:05.478 [2024-11-20 19:04:27.644962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.478 [2024-11-20 19:04:27.644992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.478 qpair failed and we were unable to recover it. 
00:27:05.478 [2024-11-20 19:04:27.645106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.478 [2024-11-20 19:04:27.645137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.478 qpair failed and we were unable to recover it. 00:27:05.478 [2024-11-20 19:04:27.645265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.478 [2024-11-20 19:04:27.645297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.478 qpair failed and we were unable to recover it. 00:27:05.478 [2024-11-20 19:04:27.645494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.478 [2024-11-20 19:04:27.645526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.478 qpair failed and we were unable to recover it. 00:27:05.478 [2024-11-20 19:04:27.645792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.478 [2024-11-20 19:04:27.645823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.478 qpair failed and we were unable to recover it. 00:27:05.478 [2024-11-20 19:04:27.646063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.478 [2024-11-20 19:04:27.646094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.478 qpair failed and we were unable to recover it. 
00:27:05.478 [2024-11-20 19:04:27.646285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.478 [2024-11-20 19:04:27.646321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.478 qpair failed and we were unable to recover it. 00:27:05.478 [2024-11-20 19:04:27.646608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.478 [2024-11-20 19:04:27.646641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.478 qpair failed and we were unable to recover it. 00:27:05.478 [2024-11-20 19:04:27.646825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.478 [2024-11-20 19:04:27.646858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.478 qpair failed and we were unable to recover it. 00:27:05.478 [2024-11-20 19:04:27.647055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.478 [2024-11-20 19:04:27.647088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.478 qpair failed and we were unable to recover it. 00:27:05.478 [2024-11-20 19:04:27.647227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.478 [2024-11-20 19:04:27.647261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.478 qpair failed and we were unable to recover it. 
00:27:05.478 [2024-11-20 19:04:27.647539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.478 [2024-11-20 19:04:27.647572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.478 qpair failed and we were unable to recover it. 00:27:05.478 [2024-11-20 19:04:27.647847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.478 [2024-11-20 19:04:27.647880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.478 qpair failed and we were unable to recover it. 00:27:05.478 [2024-11-20 19:04:27.648059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.478 [2024-11-20 19:04:27.648090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.478 qpair failed and we were unable to recover it. 00:27:05.478 [2024-11-20 19:04:27.648222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.478 [2024-11-20 19:04:27.648254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.478 qpair failed and we were unable to recover it. 00:27:05.478 [2024-11-20 19:04:27.648384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.478 [2024-11-20 19:04:27.648416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.478 qpair failed and we were unable to recover it. 
00:27:05.478 [2024-11-20 19:04:27.648657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.478 [2024-11-20 19:04:27.648690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.478 qpair failed and we were unable to recover it. 00:27:05.478 [2024-11-20 19:04:27.648958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.478 [2024-11-20 19:04:27.648990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.478 qpair failed and we were unable to recover it. 00:27:05.478 [2024-11-20 19:04:27.649103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.478 [2024-11-20 19:04:27.649140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.478 qpair failed and we were unable to recover it. 00:27:05.478 [2024-11-20 19:04:27.649319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.478 [2024-11-20 19:04:27.649357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.478 qpair failed and we were unable to recover it. 00:27:05.478 [2024-11-20 19:04:27.649538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.478 [2024-11-20 19:04:27.649576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.478 qpair failed and we were unable to recover it. 
00:27:05.478 [2024-11-20 19:04:27.649706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.478 [2024-11-20 19:04:27.649737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.478 qpair failed and we were unable to recover it. 00:27:05.478 [2024-11-20 19:04:27.649862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.478 [2024-11-20 19:04:27.649894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.478 qpair failed and we were unable to recover it. 00:27:05.478 [2024-11-20 19:04:27.650137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.478 [2024-11-20 19:04:27.650170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:05.478 qpair failed and we were unable to recover it. 00:27:05.478 [2024-11-20 19:04:27.650381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.478 [2024-11-20 19:04:27.650417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.479 qpair failed and we were unable to recover it. 00:27:05.479 [2024-11-20 19:04:27.650604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.479 [2024-11-20 19:04:27.650637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.479 qpair failed and we were unable to recover it. 
00:27:05.479 [2024-11-20 19:04:27.650740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.479 [2024-11-20 19:04:27.650773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.479 qpair failed and we were unable to recover it. 00:27:05.479 [2024-11-20 19:04:27.650948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.479 [2024-11-20 19:04:27.650980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.479 qpair failed and we were unable to recover it. 00:27:05.479 [2024-11-20 19:04:27.651228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.479 [2024-11-20 19:04:27.651266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.479 qpair failed and we were unable to recover it. 00:27:05.479 [2024-11-20 19:04:27.651442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.479 [2024-11-20 19:04:27.651474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.479 qpair failed and we were unable to recover it. 00:27:05.479 [2024-11-20 19:04:27.651746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.479 [2024-11-20 19:04:27.651779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.479 qpair failed and we were unable to recover it. 
00:27:05.479 [2024-11-20 19:04:27.651894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.479 [2024-11-20 19:04:27.651927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.479 qpair failed and we were unable to recover it. 00:27:05.479 [2024-11-20 19:04:27.652188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.479 [2024-11-20 19:04:27.652228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.479 qpair failed and we were unable to recover it. 00:27:05.479 [2024-11-20 19:04:27.652443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.479 [2024-11-20 19:04:27.652477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.479 qpair failed and we were unable to recover it. 00:27:05.479 [2024-11-20 19:04:27.652665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.479 [2024-11-20 19:04:27.652705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.479 qpair failed and we were unable to recover it. 00:27:05.479 [2024-11-20 19:04:27.652890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.479 [2024-11-20 19:04:27.652922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.479 qpair failed and we were unable to recover it. 
00:27:05.479 [2024-11-20 19:04:27.653123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.479 [2024-11-20 19:04:27.653156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.479 qpair failed and we were unable to recover it. 00:27:05.479 [2024-11-20 19:04:27.653428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.479 [2024-11-20 19:04:27.653462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.479 qpair failed and we were unable to recover it. 00:27:05.479 [2024-11-20 19:04:27.653597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.479 [2024-11-20 19:04:27.653630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.479 qpair failed and we were unable to recover it. 00:27:05.479 [2024-11-20 19:04:27.653856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.479 [2024-11-20 19:04:27.653891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.479 qpair failed and we were unable to recover it. 00:27:05.479 [2024-11-20 19:04:27.654109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.479 [2024-11-20 19:04:27.654142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.479 qpair failed and we were unable to recover it. 
00:27:05.479 [2024-11-20 19:04:27.654327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.479 [2024-11-20 19:04:27.654361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.479 qpair failed and we were unable to recover it. 00:27:05.479 [2024-11-20 19:04:27.654604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.479 [2024-11-20 19:04:27.654636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.479 qpair failed and we were unable to recover it. 00:27:05.479 [2024-11-20 19:04:27.654824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.479 [2024-11-20 19:04:27.654856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.479 qpair failed and we were unable to recover it. 00:27:05.479 [2024-11-20 19:04:27.654972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.479 [2024-11-20 19:04:27.655005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.479 qpair failed and we were unable to recover it. 00:27:05.479 [2024-11-20 19:04:27.655293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.479 [2024-11-20 19:04:27.655327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.479 qpair failed and we were unable to recover it. 
00:27:05.479 [2024-11-20 19:04:27.655521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.479 [2024-11-20 19:04:27.655554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.479 qpair failed and we were unable to recover it. 00:27:05.479 [2024-11-20 19:04:27.655685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.479 [2024-11-20 19:04:27.655719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.479 qpair failed and we were unable to recover it. 00:27:05.479 [2024-11-20 19:04:27.655958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.479 [2024-11-20 19:04:27.655991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.479 qpair failed and we were unable to recover it. 00:27:05.479 [2024-11-20 19:04:27.656177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.479 [2024-11-20 19:04:27.656220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.479 qpair failed and we were unable to recover it. 00:27:05.479 [2024-11-20 19:04:27.656356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.479 [2024-11-20 19:04:27.656389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.479 qpair failed and we were unable to recover it. 
00:27:05.479 [2024-11-20 19:04:27.656651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.479 [2024-11-20 19:04:27.656685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.479 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet repeats unchanged for tqpair=0x7f741c000b90 (addr=10.0.0.2, port=4420) from 19:04:27.656 through 19:04:27.683; repeated entries elided ...]
00:27:05.483 [2024-11-20 19:04:27.683282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.483 [2024-11-20 19:04:27.683317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.483 qpair failed and we were unable to recover it. 00:27:05.483 [2024-11-20 19:04:27.683570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.483 [2024-11-20 19:04:27.683604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.483 qpair failed and we were unable to recover it. 00:27:05.483 [2024-11-20 19:04:27.683890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.483 [2024-11-20 19:04:27.683923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.483 qpair failed and we were unable to recover it. 00:27:05.483 [2024-11-20 19:04:27.684196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.483 [2024-11-20 19:04:27.684237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.483 qpair failed and we were unable to recover it. 00:27:05.483 [2024-11-20 19:04:27.684379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.483 [2024-11-20 19:04:27.684412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.483 qpair failed and we were unable to recover it. 
00:27:05.483 [2024-11-20 19:04:27.684597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.483 [2024-11-20 19:04:27.684631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.483 qpair failed and we were unable to recover it. 00:27:05.483 [2024-11-20 19:04:27.684760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.483 [2024-11-20 19:04:27.684794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.483 qpair failed and we were unable to recover it. 00:27:05.483 [2024-11-20 19:04:27.684910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.483 [2024-11-20 19:04:27.684944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.483 qpair failed and we were unable to recover it. 00:27:05.483 [2024-11-20 19:04:27.685120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.483 [2024-11-20 19:04:27.685153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.483 qpair failed and we were unable to recover it. 00:27:05.483 [2024-11-20 19:04:27.685378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.483 [2024-11-20 19:04:27.685412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.483 qpair failed and we were unable to recover it. 
00:27:05.483 [2024-11-20 19:04:27.685600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.483 [2024-11-20 19:04:27.685633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.483 qpair failed and we were unable to recover it. 00:27:05.483 [2024-11-20 19:04:27.685968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.483 [2024-11-20 19:04:27.686001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.483 qpair failed and we were unable to recover it. 00:27:05.483 [2024-11-20 19:04:27.686127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.483 [2024-11-20 19:04:27.686160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.483 qpair failed and we were unable to recover it. 00:27:05.483 [2024-11-20 19:04:27.686293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.483 [2024-11-20 19:04:27.686327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.483 qpair failed and we were unable to recover it. 00:27:05.483 [2024-11-20 19:04:27.686535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.483 [2024-11-20 19:04:27.686574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.483 qpair failed and we were unable to recover it. 
00:27:05.484 [2024-11-20 19:04:27.686699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.484 [2024-11-20 19:04:27.686733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.484 qpair failed and we were unable to recover it. 00:27:05.484 [2024-11-20 19:04:27.686859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.484 [2024-11-20 19:04:27.686892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.484 qpair failed and we were unable to recover it. 00:27:05.484 [2024-11-20 19:04:27.687133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.484 [2024-11-20 19:04:27.687166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.484 qpair failed and we were unable to recover it. 00:27:05.484 [2024-11-20 19:04:27.687374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.484 [2024-11-20 19:04:27.687408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.484 qpair failed and we were unable to recover it. 00:27:05.484 [2024-11-20 19:04:27.687581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.484 [2024-11-20 19:04:27.687615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.484 qpair failed and we were unable to recover it. 
00:27:05.484 [2024-11-20 19:04:27.687790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.484 [2024-11-20 19:04:27.687823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.484 qpair failed and we were unable to recover it. 00:27:05.484 [2024-11-20 19:04:27.687930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.484 [2024-11-20 19:04:27.687964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.484 qpair failed and we were unable to recover it. 00:27:05.484 [2024-11-20 19:04:27.688228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.484 [2024-11-20 19:04:27.688263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.484 qpair failed and we were unable to recover it. 00:27:05.484 [2024-11-20 19:04:27.688448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.484 [2024-11-20 19:04:27.688481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.484 qpair failed and we were unable to recover it. 00:27:05.484 [2024-11-20 19:04:27.688672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.484 [2024-11-20 19:04:27.688705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.484 qpair failed and we were unable to recover it. 
00:27:05.484 [2024-11-20 19:04:27.688897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.484 [2024-11-20 19:04:27.688930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.484 qpair failed and we were unable to recover it. 00:27:05.484 [2024-11-20 19:04:27.689196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.484 [2024-11-20 19:04:27.689236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.484 qpair failed and we were unable to recover it. 00:27:05.484 [2024-11-20 19:04:27.689430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.484 [2024-11-20 19:04:27.689463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.484 qpair failed and we were unable to recover it. 00:27:05.484 [2024-11-20 19:04:27.689614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.484 [2024-11-20 19:04:27.689649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.484 qpair failed and we were unable to recover it. 00:27:05.484 [2024-11-20 19:04:27.689848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.484 [2024-11-20 19:04:27.689881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.484 qpair failed and we were unable to recover it. 
00:27:05.484 [2024-11-20 19:04:27.689999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.484 [2024-11-20 19:04:27.690032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.484 qpair failed and we were unable to recover it. 00:27:05.484 [2024-11-20 19:04:27.690232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.484 [2024-11-20 19:04:27.690267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.484 qpair failed and we were unable to recover it. 00:27:05.484 [2024-11-20 19:04:27.690518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.484 [2024-11-20 19:04:27.690552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.484 qpair failed and we were unable to recover it. 00:27:05.484 [2024-11-20 19:04:27.690727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.484 [2024-11-20 19:04:27.690760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.484 qpair failed and we were unable to recover it. 00:27:05.484 [2024-11-20 19:04:27.690975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.484 [2024-11-20 19:04:27.691008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.484 qpair failed and we were unable to recover it. 
00:27:05.484 [2024-11-20 19:04:27.691277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.484 [2024-11-20 19:04:27.691312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.484 qpair failed and we were unable to recover it. 00:27:05.484 [2024-11-20 19:04:27.691431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.484 [2024-11-20 19:04:27.691464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.484 qpair failed and we were unable to recover it. 00:27:05.484 [2024-11-20 19:04:27.691652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.484 [2024-11-20 19:04:27.691685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.484 qpair failed and we were unable to recover it. 00:27:05.484 [2024-11-20 19:04:27.691893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.484 [2024-11-20 19:04:27.691926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.484 qpair failed and we were unable to recover it. 00:27:05.484 [2024-11-20 19:04:27.692167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.484 [2024-11-20 19:04:27.692199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.484 qpair failed and we were unable to recover it. 
00:27:05.484 [2024-11-20 19:04:27.692387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.484 [2024-11-20 19:04:27.692421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.484 qpair failed and we were unable to recover it. 00:27:05.484 [2024-11-20 19:04:27.692641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.484 [2024-11-20 19:04:27.692675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.484 qpair failed and we were unable to recover it. 00:27:05.484 [2024-11-20 19:04:27.692917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.484 [2024-11-20 19:04:27.692950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.484 qpair failed and we were unable to recover it. 00:27:05.484 [2024-11-20 19:04:27.693171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.484 [2024-11-20 19:04:27.693213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.484 qpair failed and we were unable to recover it. 00:27:05.484 [2024-11-20 19:04:27.693448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.484 [2024-11-20 19:04:27.693481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.484 qpair failed and we were unable to recover it. 
00:27:05.484 [2024-11-20 19:04:27.693725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.484 [2024-11-20 19:04:27.693758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.484 qpair failed and we were unable to recover it. 00:27:05.484 [2024-11-20 19:04:27.693978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.484 [2024-11-20 19:04:27.694011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.484 qpair failed and we were unable to recover it. 00:27:05.484 [2024-11-20 19:04:27.694256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.484 [2024-11-20 19:04:27.694291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.484 qpair failed and we were unable to recover it. 00:27:05.484 [2024-11-20 19:04:27.694401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.485 [2024-11-20 19:04:27.694439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.485 qpair failed and we were unable to recover it. 00:27:05.485 [2024-11-20 19:04:27.694690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.485 [2024-11-20 19:04:27.694723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.485 qpair failed and we were unable to recover it. 
00:27:05.485 [2024-11-20 19:04:27.695009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.485 [2024-11-20 19:04:27.695042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.485 qpair failed and we were unable to recover it. 00:27:05.485 [2024-11-20 19:04:27.695242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.485 [2024-11-20 19:04:27.695277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.485 qpair failed and we were unable to recover it. 00:27:05.485 [2024-11-20 19:04:27.695529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.485 [2024-11-20 19:04:27.695563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.485 qpair failed and we were unable to recover it. 00:27:05.485 [2024-11-20 19:04:27.695669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.485 [2024-11-20 19:04:27.695701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.485 qpair failed and we were unable to recover it. 00:27:05.485 [2024-11-20 19:04:27.695919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.485 [2024-11-20 19:04:27.695958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.485 qpair failed and we were unable to recover it. 
00:27:05.485 [2024-11-20 19:04:27.696151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.485 [2024-11-20 19:04:27.696184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.485 qpair failed and we were unable to recover it. 00:27:05.485 [2024-11-20 19:04:27.696376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.485 [2024-11-20 19:04:27.696410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.485 qpair failed and we were unable to recover it. 00:27:05.485 [2024-11-20 19:04:27.696676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.485 [2024-11-20 19:04:27.696708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.485 qpair failed and we were unable to recover it. 00:27:05.485 [2024-11-20 19:04:27.697002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.485 [2024-11-20 19:04:27.697035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.485 qpair failed and we were unable to recover it. 00:27:05.485 [2024-11-20 19:04:27.697224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.485 [2024-11-20 19:04:27.697258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.485 qpair failed and we were unable to recover it. 
00:27:05.485 [2024-11-20 19:04:27.697391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.485 [2024-11-20 19:04:27.697423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.485 qpair failed and we were unable to recover it. 00:27:05.485 [2024-11-20 19:04:27.697619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.485 [2024-11-20 19:04:27.697653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.485 qpair failed and we were unable to recover it. 00:27:05.485 [2024-11-20 19:04:27.697875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.485 [2024-11-20 19:04:27.697908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.485 qpair failed and we were unable to recover it. 00:27:05.485 [2024-11-20 19:04:27.698151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.485 [2024-11-20 19:04:27.698184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.485 qpair failed and we were unable to recover it. 00:27:05.485 [2024-11-20 19:04:27.698451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.485 [2024-11-20 19:04:27.698485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.485 qpair failed and we were unable to recover it. 
00:27:05.485 [2024-11-20 19:04:27.698754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.485 [2024-11-20 19:04:27.698793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.485 qpair failed and we were unable to recover it. 00:27:05.485 [2024-11-20 19:04:27.698982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.485 [2024-11-20 19:04:27.699016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.485 qpair failed and we were unable to recover it. 00:27:05.485 [2024-11-20 19:04:27.699234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.485 [2024-11-20 19:04:27.699268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.485 qpair failed and we were unable to recover it. 00:27:05.485 [2024-11-20 19:04:27.699467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.485 [2024-11-20 19:04:27.699501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.485 qpair failed and we were unable to recover it. 00:27:05.485 [2024-11-20 19:04:27.699616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.485 [2024-11-20 19:04:27.699648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.485 qpair failed and we were unable to recover it. 
00:27:05.485 [2024-11-20 19:04:27.699822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.485 [2024-11-20 19:04:27.699861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.485 qpair failed and we were unable to recover it. 00:27:05.485 [2024-11-20 19:04:27.700039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.485 [2024-11-20 19:04:27.700072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.485 qpair failed and we were unable to recover it. 00:27:05.485 [2024-11-20 19:04:27.700258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.485 [2024-11-20 19:04:27.700292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.485 qpair failed and we were unable to recover it. 00:27:05.486 [2024-11-20 19:04:27.700432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.486 [2024-11-20 19:04:27.700465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.486 qpair failed and we were unable to recover it. 00:27:05.486 [2024-11-20 19:04:27.700729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.486 [2024-11-20 19:04:27.700761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.486 qpair failed and we were unable to recover it. 
00:27:05.486 [2024-11-20 19:04:27.701029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.486 [2024-11-20 19:04:27.701061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.486 qpair failed and we were unable to recover it. 00:27:05.486 [2024-11-20 19:04:27.701184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.486 [2024-11-20 19:04:27.701226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.486 qpair failed and we were unable to recover it. 00:27:05.486 [2024-11-20 19:04:27.701440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.486 [2024-11-20 19:04:27.701473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.486 qpair failed and we were unable to recover it. 00:27:05.486 [2024-11-20 19:04:27.701596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.486 [2024-11-20 19:04:27.701629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.486 qpair failed and we were unable to recover it. 00:27:05.486 [2024-11-20 19:04:27.701761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.486 [2024-11-20 19:04:27.701794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.486 qpair failed and we were unable to recover it. 
00:27:05.489 [2024-11-20 19:04:27.726940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.489 [2024-11-20 19:04:27.726972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.489 qpair failed and we were unable to recover it. 00:27:05.489 [2024-11-20 19:04:27.727090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.489 [2024-11-20 19:04:27.727124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.489 qpair failed and we were unable to recover it. 00:27:05.489 [2024-11-20 19:04:27.727310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.489 [2024-11-20 19:04:27.727344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.489 qpair failed and we were unable to recover it. 00:27:05.489 [2024-11-20 19:04:27.727478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.489 [2024-11-20 19:04:27.727510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.489 qpair failed and we were unable to recover it. 00:27:05.489 [2024-11-20 19:04:27.727649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.489 [2024-11-20 19:04:27.727682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.489 qpair failed and we were unable to recover it. 
00:27:05.489 [2024-11-20 19:04:27.727952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.489 [2024-11-20 19:04:27.727985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.489 qpair failed and we were unable to recover it. 00:27:05.489 [2024-11-20 19:04:27.728158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.489 [2024-11-20 19:04:27.728191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.489 qpair failed and we were unable to recover it. 00:27:05.489 [2024-11-20 19:04:27.728329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.489 [2024-11-20 19:04:27.728363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.489 qpair failed and we were unable to recover it. 00:27:05.489 [2024-11-20 19:04:27.728557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.489 [2024-11-20 19:04:27.728589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.489 qpair failed and we were unable to recover it. 00:27:05.489 [2024-11-20 19:04:27.728882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.489 [2024-11-20 19:04:27.728915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.489 qpair failed and we were unable to recover it. 
00:27:05.489 [2024-11-20 19:04:27.729024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.489 [2024-11-20 19:04:27.729065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.489 qpair failed and we were unable to recover it. 00:27:05.489 [2024-11-20 19:04:27.729261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.489 [2024-11-20 19:04:27.729296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.489 qpair failed and we were unable to recover it. 00:27:05.489 [2024-11-20 19:04:27.729565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.489 [2024-11-20 19:04:27.729598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.489 qpair failed and we were unable to recover it. 00:27:05.490 [2024-11-20 19:04:27.729724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.490 [2024-11-20 19:04:27.729757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.490 qpair failed and we were unable to recover it. 00:27:05.490 [2024-11-20 19:04:27.729877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.490 [2024-11-20 19:04:27.729910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.490 qpair failed and we were unable to recover it. 
00:27:05.490 [2024-11-20 19:04:27.730106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.490 [2024-11-20 19:04:27.730140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.490 qpair failed and we were unable to recover it. 00:27:05.490 [2024-11-20 19:04:27.730324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.490 [2024-11-20 19:04:27.730358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.490 qpair failed and we were unable to recover it. 00:27:05.490 [2024-11-20 19:04:27.730469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.490 [2024-11-20 19:04:27.730502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.490 qpair failed and we were unable to recover it. 00:27:05.490 [2024-11-20 19:04:27.730694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.490 [2024-11-20 19:04:27.730733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.490 qpair failed and we were unable to recover it. 00:27:05.490 [2024-11-20 19:04:27.730842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.490 [2024-11-20 19:04:27.730874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.490 qpair failed and we were unable to recover it. 
00:27:05.490 [2024-11-20 19:04:27.731057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.490 [2024-11-20 19:04:27.731091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.490 qpair failed and we were unable to recover it. 00:27:05.490 [2024-11-20 19:04:27.731267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.490 [2024-11-20 19:04:27.731301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.490 qpair failed and we were unable to recover it. 00:27:05.490 [2024-11-20 19:04:27.731510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.490 [2024-11-20 19:04:27.731542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.490 qpair failed and we were unable to recover it. 00:27:05.490 [2024-11-20 19:04:27.731716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.490 [2024-11-20 19:04:27.731750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.490 qpair failed and we were unable to recover it. 00:27:05.490 [2024-11-20 19:04:27.731980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.490 [2024-11-20 19:04:27.732019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.490 qpair failed and we were unable to recover it. 
00:27:05.490 [2024-11-20 19:04:27.732272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.490 [2024-11-20 19:04:27.732305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.490 qpair failed and we were unable to recover it. 00:27:05.490 [2024-11-20 19:04:27.732486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.490 [2024-11-20 19:04:27.732519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.490 qpair failed and we were unable to recover it. 00:27:05.490 [2024-11-20 19:04:27.732784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.490 [2024-11-20 19:04:27.732817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.490 qpair failed and we were unable to recover it. 00:27:05.490 [2024-11-20 19:04:27.732989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.490 [2024-11-20 19:04:27.733022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.490 qpair failed and we were unable to recover it. 00:27:05.490 [2024-11-20 19:04:27.733265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.490 [2024-11-20 19:04:27.733315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.490 qpair failed and we were unable to recover it. 
00:27:05.490 [2024-11-20 19:04:27.733525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.490 [2024-11-20 19:04:27.733558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.490 qpair failed and we were unable to recover it. 00:27:05.490 [2024-11-20 19:04:27.733757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.490 [2024-11-20 19:04:27.733789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.490 qpair failed and we were unable to recover it. 00:27:05.490 [2024-11-20 19:04:27.733966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.490 [2024-11-20 19:04:27.733999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.490 qpair failed and we were unable to recover it. 00:27:05.490 [2024-11-20 19:04:27.734264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.490 [2024-11-20 19:04:27.734297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.490 qpair failed and we were unable to recover it. 00:27:05.490 [2024-11-20 19:04:27.734417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.490 [2024-11-20 19:04:27.734450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.490 qpair failed and we were unable to recover it. 
00:27:05.490 [2024-11-20 19:04:27.734638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.490 [2024-11-20 19:04:27.734671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.490 qpair failed and we were unable to recover it. 00:27:05.490 [2024-11-20 19:04:27.734845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.490 [2024-11-20 19:04:27.734877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.490 qpair failed and we were unable to recover it. 00:27:05.490 [2024-11-20 19:04:27.735123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.490 [2024-11-20 19:04:27.735156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.490 qpair failed and we were unable to recover it. 00:27:05.490 [2024-11-20 19:04:27.735356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.490 [2024-11-20 19:04:27.735391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.490 qpair failed and we were unable to recover it. 00:27:05.490 [2024-11-20 19:04:27.735602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.490 [2024-11-20 19:04:27.735635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.490 qpair failed and we were unable to recover it. 
00:27:05.490 [2024-11-20 19:04:27.735841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.490 [2024-11-20 19:04:27.735874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.490 qpair failed and we were unable to recover it. 00:27:05.490 [2024-11-20 19:04:27.736069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.490 [2024-11-20 19:04:27.736103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.490 qpair failed and we were unable to recover it. 00:27:05.490 [2024-11-20 19:04:27.736411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.490 [2024-11-20 19:04:27.736444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.490 qpair failed and we were unable to recover it. 00:27:05.490 [2024-11-20 19:04:27.736725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.490 [2024-11-20 19:04:27.736757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.490 qpair failed and we were unable to recover it. 00:27:05.490 [2024-11-20 19:04:27.736950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.490 [2024-11-20 19:04:27.736983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.490 qpair failed and we were unable to recover it. 
00:27:05.490 [2024-11-20 19:04:27.737251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.490 [2024-11-20 19:04:27.737285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.490 qpair failed and we were unable to recover it. 00:27:05.490 [2024-11-20 19:04:27.737461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.491 [2024-11-20 19:04:27.737505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.491 qpair failed and we were unable to recover it. 00:27:05.491 [2024-11-20 19:04:27.737713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.491 [2024-11-20 19:04:27.737746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.491 qpair failed and we were unable to recover it. 00:27:05.491 [2024-11-20 19:04:27.737947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.491 [2024-11-20 19:04:27.737980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.491 qpair failed and we were unable to recover it. 00:27:05.491 [2024-11-20 19:04:27.738167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.491 [2024-11-20 19:04:27.738200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.491 qpair failed and we were unable to recover it. 
00:27:05.491 [2024-11-20 19:04:27.738346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.491 [2024-11-20 19:04:27.738379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.491 qpair failed and we were unable to recover it. 00:27:05.491 [2024-11-20 19:04:27.738588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.491 [2024-11-20 19:04:27.738621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.491 qpair failed and we were unable to recover it. 00:27:05.491 [2024-11-20 19:04:27.738750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.491 [2024-11-20 19:04:27.738783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.491 qpair failed and we were unable to recover it. 00:27:05.491 [2024-11-20 19:04:27.738972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.491 [2024-11-20 19:04:27.739005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.491 qpair failed and we were unable to recover it. 00:27:05.491 [2024-11-20 19:04:27.739208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.491 [2024-11-20 19:04:27.739242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.491 qpair failed and we were unable to recover it. 
00:27:05.491 [2024-11-20 19:04:27.739499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.491 [2024-11-20 19:04:27.739531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.491 qpair failed and we were unable to recover it. 00:27:05.491 [2024-11-20 19:04:27.739678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.491 [2024-11-20 19:04:27.739712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.491 qpair failed and we were unable to recover it. 00:27:05.491 [2024-11-20 19:04:27.739911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.491 [2024-11-20 19:04:27.739943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.491 qpair failed and we were unable to recover it. 00:27:05.491 [2024-11-20 19:04:27.740082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.491 [2024-11-20 19:04:27.740116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.491 qpair failed and we were unable to recover it. 00:27:05.491 [2024-11-20 19:04:27.740324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.491 [2024-11-20 19:04:27.740358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.491 qpair failed and we were unable to recover it. 
00:27:05.491 [2024-11-20 19:04:27.740530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.491 [2024-11-20 19:04:27.740563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.491 qpair failed and we were unable to recover it. 00:27:05.491 [2024-11-20 19:04:27.740690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.491 [2024-11-20 19:04:27.740724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.491 qpair failed and we were unable to recover it. 00:27:05.491 [2024-11-20 19:04:27.740856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.491 [2024-11-20 19:04:27.740889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.491 qpair failed and we were unable to recover it. 00:27:05.491 [2024-11-20 19:04:27.741016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.491 [2024-11-20 19:04:27.741049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.491 qpair failed and we were unable to recover it. 00:27:05.491 [2024-11-20 19:04:27.741226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.491 [2024-11-20 19:04:27.741267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.491 qpair failed and we were unable to recover it. 
00:27:05.491 [2024-11-20 19:04:27.741453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.491 [2024-11-20 19:04:27.741486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.491 qpair failed and we were unable to recover it. 00:27:05.491 [2024-11-20 19:04:27.741749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.491 [2024-11-20 19:04:27.741782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.491 qpair failed and we were unable to recover it. 00:27:05.491 [2024-11-20 19:04:27.741906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.491 [2024-11-20 19:04:27.741939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.491 qpair failed and we were unable to recover it. 00:27:05.491 [2024-11-20 19:04:27.742124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.491 [2024-11-20 19:04:27.742158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.491 qpair failed and we were unable to recover it. 00:27:05.491 [2024-11-20 19:04:27.742358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.491 [2024-11-20 19:04:27.742392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.491 qpair failed and we were unable to recover it. 
00:27:05.491 [2024-11-20 19:04:27.742660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.491 [2024-11-20 19:04:27.742694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.491 qpair failed and we were unable to recover it. 00:27:05.491 [2024-11-20 19:04:27.742886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.491 [2024-11-20 19:04:27.742919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.491 qpair failed and we were unable to recover it. 00:27:05.491 [2024-11-20 19:04:27.743099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.491 [2024-11-20 19:04:27.743131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.491 qpair failed and we were unable to recover it. 00:27:05.491 [2024-11-20 19:04:27.743251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.491 [2024-11-20 19:04:27.743285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.491 qpair failed and we were unable to recover it. 00:27:05.491 [2024-11-20 19:04:27.743470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.492 [2024-11-20 19:04:27.743504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.492 qpair failed and we were unable to recover it. 
00:27:05.492 [2024-11-20 19:04:27.743745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.492 [2024-11-20 19:04:27.743777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.492 qpair failed and we were unable to recover it. 00:27:05.492 [2024-11-20 19:04:27.744036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.492 [2024-11-20 19:04:27.744070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.492 qpair failed and we were unable to recover it. 00:27:05.492 [2024-11-20 19:04:27.744222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.492 [2024-11-20 19:04:27.744256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.492 qpair failed and we were unable to recover it. 00:27:05.492 [2024-11-20 19:04:27.744453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.492 [2024-11-20 19:04:27.744487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.492 qpair failed and we were unable to recover it. 00:27:05.492 [2024-11-20 19:04:27.744593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.492 [2024-11-20 19:04:27.744626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.492 qpair failed and we were unable to recover it. 
00:27:05.492 [2024-11-20 19:04:27.744871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.492 [2024-11-20 19:04:27.744904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.492 qpair failed and we were unable to recover it. 00:27:05.492 [2024-11-20 19:04:27.745147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.492 [2024-11-20 19:04:27.745180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.492 qpair failed and we were unable to recover it. 00:27:05.492 [2024-11-20 19:04:27.745395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.492 [2024-11-20 19:04:27.745429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.492 qpair failed and we were unable to recover it. 00:27:05.492 [2024-11-20 19:04:27.745552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.492 [2024-11-20 19:04:27.745586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.492 qpair failed and we were unable to recover it. 00:27:05.492 [2024-11-20 19:04:27.745773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.492 [2024-11-20 19:04:27.745806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.492 qpair failed and we were unable to recover it. 
00:27:05.492 [2024-11-20 19:04:27.745995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.492 [2024-11-20 19:04:27.746027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.492 qpair failed and we were unable to recover it. 00:27:05.492 [2024-11-20 19:04:27.746224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.492 [2024-11-20 19:04:27.746260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.492 qpair failed and we were unable to recover it. 00:27:05.492 [2024-11-20 19:04:27.746445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.492 [2024-11-20 19:04:27.746477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.492 qpair failed and we were unable to recover it. 00:27:05.492 [2024-11-20 19:04:27.746661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.492 [2024-11-20 19:04:27.746694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.492 qpair failed and we were unable to recover it. 00:27:05.492 [2024-11-20 19:04:27.746883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.492 [2024-11-20 19:04:27.746916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.492 qpair failed and we were unable to recover it. 
00:27:05.492 [2024-11-20 19:04:27.747087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.492 [2024-11-20 19:04:27.747120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.492 qpair failed and we were unable to recover it. 00:27:05.492 [2024-11-20 19:04:27.747368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.492 [2024-11-20 19:04:27.747403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.492 qpair failed and we were unable to recover it. 00:27:05.492 [2024-11-20 19:04:27.747591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.492 [2024-11-20 19:04:27.747624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.492 qpair failed and we were unable to recover it. 00:27:05.492 [2024-11-20 19:04:27.747814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.492 [2024-11-20 19:04:27.747846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.492 qpair failed and we were unable to recover it. 00:27:05.492 [2024-11-20 19:04:27.748047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.492 [2024-11-20 19:04:27.748081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.492 qpair failed and we were unable to recover it. 
00:27:05.492 [2024-11-20 19:04:27.748349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.492 [2024-11-20 19:04:27.748384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.492 qpair failed and we were unable to recover it. 00:27:05.492 [2024-11-20 19:04:27.748560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.492 [2024-11-20 19:04:27.748593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.492 qpair failed and we were unable to recover it. 00:27:05.492 [2024-11-20 19:04:27.748859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.492 [2024-11-20 19:04:27.748892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.492 qpair failed and we were unable to recover it. 00:27:05.492 [2024-11-20 19:04:27.749017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.492 [2024-11-20 19:04:27.749050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.492 qpair failed and we were unable to recover it. 00:27:05.492 [2024-11-20 19:04:27.749287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.492 [2024-11-20 19:04:27.749322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.492 qpair failed and we were unable to recover it. 
00:27:05.492 [2024-11-20 19:04:27.749513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.492 [2024-11-20 19:04:27.749547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.492 qpair failed and we were unable to recover it. 00:27:05.492 [2024-11-20 19:04:27.749741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.492 [2024-11-20 19:04:27.749774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.492 qpair failed and we were unable to recover it. 00:27:05.492 [2024-11-20 19:04:27.750035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.492 [2024-11-20 19:04:27.750068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.492 qpair failed and we were unable to recover it. 00:27:05.492 [2024-11-20 19:04:27.750261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.492 [2024-11-20 19:04:27.750295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.492 qpair failed and we were unable to recover it. 00:27:05.492 [2024-11-20 19:04:27.750489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.492 [2024-11-20 19:04:27.750528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.492 qpair failed and we were unable to recover it. 
00:27:05.492 [2024-11-20 19:04:27.750730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.492 [2024-11-20 19:04:27.750763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.492 qpair failed and we were unable to recover it. 00:27:05.492 [2024-11-20 19:04:27.750892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.492 [2024-11-20 19:04:27.750938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.492 qpair failed and we were unable to recover it. 00:27:05.492 [2024-11-20 19:04:27.751057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.492 [2024-11-20 19:04:27.751089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.492 qpair failed and we were unable to recover it. 00:27:05.492 [2024-11-20 19:04:27.751303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.492 [2024-11-20 19:04:27.751337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.492 qpair failed and we were unable to recover it. 00:27:05.492 [2024-11-20 19:04:27.751609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.492 [2024-11-20 19:04:27.751641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.492 qpair failed and we were unable to recover it. 
00:27:05.492 [2024-11-20 19:04:27.751856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.751890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 00:27:05.493 [2024-11-20 19:04:27.752073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.752106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 00:27:05.493 [2024-11-20 19:04:27.752282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.752316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 00:27:05.493 [2024-11-20 19:04:27.752440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.752473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 00:27:05.493 [2024-11-20 19:04:27.752711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.752744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 
00:27:05.493 [2024-11-20 19:04:27.752927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.752960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 00:27:05.493 [2024-11-20 19:04:27.753091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.753125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 00:27:05.493 [2024-11-20 19:04:27.753364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.753398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 00:27:05.493 [2024-11-20 19:04:27.753578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.753611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 00:27:05.493 [2024-11-20 19:04:27.753787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.753820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 
00:27:05.493 [2024-11-20 19:04:27.754061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.754098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 00:27:05.493 [2024-11-20 19:04:27.754309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.754344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 00:27:05.493 [2024-11-20 19:04:27.754628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.754660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 00:27:05.493 [2024-11-20 19:04:27.754872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.754905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 00:27:05.493 [2024-11-20 19:04:27.755084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.755116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 
00:27:05.493 [2024-11-20 19:04:27.755363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.755398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 00:27:05.493 [2024-11-20 19:04:27.755641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.755674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 00:27:05.493 [2024-11-20 19:04:27.755941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.755976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 00:27:05.493 [2024-11-20 19:04:27.756164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.756197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 00:27:05.493 [2024-11-20 19:04:27.756316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.756350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 
00:27:05.493 [2024-11-20 19:04:27.756472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.756506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 00:27:05.493 [2024-11-20 19:04:27.756695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.756767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 00:27:05.493 [2024-11-20 19:04:27.756916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.756954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 00:27:05.493 [2024-11-20 19:04:27.757076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.757111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 00:27:05.493 [2024-11-20 19:04:27.757234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.757269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 
00:27:05.493 [2024-11-20 19:04:27.757510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.757543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 00:27:05.493 [2024-11-20 19:04:27.757741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.757774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 00:27:05.493 [2024-11-20 19:04:27.757886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.757918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 00:27:05.493 [2024-11-20 19:04:27.758026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.758059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 00:27:05.493 [2024-11-20 19:04:27.758222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.758256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 
00:27:05.493 [2024-11-20 19:04:27.758502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.758535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 00:27:05.493 [2024-11-20 19:04:27.758726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.758758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 00:27:05.493 [2024-11-20 19:04:27.758998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.759030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 00:27:05.493 [2024-11-20 19:04:27.759153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.759192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 00:27:05.493 [2024-11-20 19:04:27.759321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.759353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 
00:27:05.493 [2024-11-20 19:04:27.759628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.759660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 00:27:05.493 [2024-11-20 19:04:27.759800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.759831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 00:27:05.493 [2024-11-20 19:04:27.760007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.493 [2024-11-20 19:04:27.760040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.493 qpair failed and we were unable to recover it. 00:27:05.493 [2024-11-20 19:04:27.760237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.494 [2024-11-20 19:04:27.760271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.494 qpair failed and we were unable to recover it. 00:27:05.494 [2024-11-20 19:04:27.760462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.494 [2024-11-20 19:04:27.760494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.494 qpair failed and we were unable to recover it. 
00:27:05.494 [2024-11-20 19:04:27.760739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.494 [2024-11-20 19:04:27.760771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.494 qpair failed and we were unable to recover it. 00:27:05.494 [2024-11-20 19:04:27.760926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.494 [2024-11-20 19:04:27.760958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.494 qpair failed and we were unable to recover it. 00:27:05.494 [2024-11-20 19:04:27.761159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.494 [2024-11-20 19:04:27.761192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.494 qpair failed and we were unable to recover it. 00:27:05.494 [2024-11-20 19:04:27.761404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.494 [2024-11-20 19:04:27.761437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.494 qpair failed and we were unable to recover it. 00:27:05.494 [2024-11-20 19:04:27.761617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.494 [2024-11-20 19:04:27.761650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.494 qpair failed and we were unable to recover it. 
00:27:05.494 [2024-11-20 19:04:27.761890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.494 [2024-11-20 19:04:27.761922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.494 qpair failed and we were unable to recover it. 00:27:05.494 [2024-11-20 19:04:27.762054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.494 [2024-11-20 19:04:27.762086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.494 qpair failed and we were unable to recover it. 00:27:05.494 [2024-11-20 19:04:27.762265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.494 [2024-11-20 19:04:27.762299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.494 qpair failed and we were unable to recover it. 00:27:05.494 [2024-11-20 19:04:27.762471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.494 [2024-11-20 19:04:27.762510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.494 qpair failed and we were unable to recover it. 00:27:05.494 [2024-11-20 19:04:27.762631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.494 [2024-11-20 19:04:27.762663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.494 qpair failed and we were unable to recover it. 
00:27:05.494 [2024-11-20 19:04:27.762837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.494 [2024-11-20 19:04:27.762869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.494 qpair failed and we were unable to recover it. 00:27:05.494 [2024-11-20 19:04:27.763043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.494 [2024-11-20 19:04:27.763077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.494 qpair failed and we were unable to recover it. 00:27:05.494 [2024-11-20 19:04:27.763263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.494 [2024-11-20 19:04:27.763298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.494 qpair failed and we were unable to recover it. 00:27:05.494 [2024-11-20 19:04:27.763488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.494 [2024-11-20 19:04:27.763519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.494 qpair failed and we were unable to recover it. 00:27:05.494 [2024-11-20 19:04:27.763717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.494 [2024-11-20 19:04:27.763749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.494 qpair failed and we were unable to recover it. 
00:27:05.494 [2024-11-20 19:04:27.763993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.494 [2024-11-20 19:04:27.764025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.494 qpair failed and we were unable to recover it. 00:27:05.494 [2024-11-20 19:04:27.764215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.494 [2024-11-20 19:04:27.764248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.494 qpair failed and we were unable to recover it. 00:27:05.494 [2024-11-20 19:04:27.764439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.494 [2024-11-20 19:04:27.764472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.494 qpair failed and we were unable to recover it. 00:27:05.494 [2024-11-20 19:04:27.764719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.494 [2024-11-20 19:04:27.764752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.494 qpair failed and we were unable to recover it. 00:27:05.773 [2024-11-20 19:04:27.764939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.773 [2024-11-20 19:04:27.764971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.773 qpair failed and we were unable to recover it. 
00:27:05.773 [2024-11-20 19:04:27.765096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.773 [2024-11-20 19:04:27.765128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.773 qpair failed and we were unable to recover it.
00:27:05.773 [2024-11-20 19:04:27.765253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.773 [2024-11-20 19:04:27.765286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.773 qpair failed and we were unable to recover it.
00:27:05.773 [2024-11-20 19:04:27.765559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.773 [2024-11-20 19:04:27.765591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.773 qpair failed and we were unable to recover it.
00:27:05.773 [2024-11-20 19:04:27.765770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.773 [2024-11-20 19:04:27.765803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.774 [2024-11-20 19:04:27.765983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.774 [2024-11-20 19:04:27.766016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.774 [2024-11-20 19:04:27.766235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.774 [2024-11-20 19:04:27.766272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.774 [2024-11-20 19:04:27.766475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.774 [2024-11-20 19:04:27.766508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.774 [2024-11-20 19:04:27.766707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.774 [2024-11-20 19:04:27.766740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.774 [2024-11-20 19:04:27.766926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.774 [2024-11-20 19:04:27.766958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.774 [2024-11-20 19:04:27.767133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.774 [2024-11-20 19:04:27.767166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.774 [2024-11-20 19:04:27.767289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.774 [2024-11-20 19:04:27.767322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.774 [2024-11-20 19:04:27.767595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.774 [2024-11-20 19:04:27.767629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.774 [2024-11-20 19:04:27.767816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.774 [2024-11-20 19:04:27.767848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.774 [2024-11-20 19:04:27.768041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.774 [2024-11-20 19:04:27.768073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.774 [2024-11-20 19:04:27.768253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.774 [2024-11-20 19:04:27.768294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.774 [2024-11-20 19:04:27.768417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.774 [2024-11-20 19:04:27.768457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.774 [2024-11-20 19:04:27.768719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.774 [2024-11-20 19:04:27.768752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.774 [2024-11-20 19:04:27.768963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.774 [2024-11-20 19:04:27.768995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.774 [2024-11-20 19:04:27.769262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.774 [2024-11-20 19:04:27.769296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.774 [2024-11-20 19:04:27.769475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.774 [2024-11-20 19:04:27.769508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.774 [2024-11-20 19:04:27.769718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.774 [2024-11-20 19:04:27.769750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.774 [2024-11-20 19:04:27.769893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.774 [2024-11-20 19:04:27.769926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.774 [2024-11-20 19:04:27.770131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.774 [2024-11-20 19:04:27.770163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.774 [2024-11-20 19:04:27.770348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.774 [2024-11-20 19:04:27.770381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.774 [2024-11-20 19:04:27.770557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.774 [2024-11-20 19:04:27.770595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.774 [2024-11-20 19:04:27.770717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.774 [2024-11-20 19:04:27.770757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.774 [2024-11-20 19:04:27.770861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.774 [2024-11-20 19:04:27.770893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.774 [2024-11-20 19:04:27.771079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.774 [2024-11-20 19:04:27.771110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.774 [2024-11-20 19:04:27.771230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.774 [2024-11-20 19:04:27.771264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.774 [2024-11-20 19:04:27.771539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.774 [2024-11-20 19:04:27.771571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.774 [2024-11-20 19:04:27.771752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.774 [2024-11-20 19:04:27.771784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.774 [2024-11-20 19:04:27.771906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.774 [2024-11-20 19:04:27.771939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.774 [2024-11-20 19:04:27.772191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.774 [2024-11-20 19:04:27.772234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.774 [2024-11-20 19:04:27.772369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.774 [2024-11-20 19:04:27.772402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.774 [2024-11-20 19:04:27.772674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.774 [2024-11-20 19:04:27.772707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.774 [2024-11-20 19:04:27.772829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.774 [2024-11-20 19:04:27.772861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.774 [2024-11-20 19:04:27.773136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.774 [2024-11-20 19:04:27.773169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.774 [2024-11-20 19:04:27.773427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.774 [2024-11-20 19:04:27.773500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.774 [2024-11-20 19:04:27.773763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.774 [2024-11-20 19:04:27.773801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.774 [2024-11-20 19:04:27.774047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.774 [2024-11-20 19:04:27.774081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.774 qpair failed and we were unable to recover it.
00:27:05.775 [2024-11-20 19:04:27.774374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.775 [2024-11-20 19:04:27.774410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.775 qpair failed and we were unable to recover it.
00:27:05.775 [2024-11-20 19:04:27.774667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.775 [2024-11-20 19:04:27.774701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.775 qpair failed and we were unable to recover it.
00:27:05.775 [2024-11-20 19:04:27.774894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.775 [2024-11-20 19:04:27.774936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.775 qpair failed and we were unable to recover it.
00:27:05.775 [2024-11-20 19:04:27.775125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.775 [2024-11-20 19:04:27.775158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.775 qpair failed and we were unable to recover it.
00:27:05.775 [2024-11-20 19:04:27.775415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.775 [2024-11-20 19:04:27.775449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.775 qpair failed and we were unable to recover it.
00:27:05.775 [2024-11-20 19:04:27.775635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.775 [2024-11-20 19:04:27.775669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.775 qpair failed and we were unable to recover it.
00:27:05.775 [2024-11-20 19:04:27.775855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.775 [2024-11-20 19:04:27.775889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.775 qpair failed and we were unable to recover it.
00:27:05.775 [2024-11-20 19:04:27.776088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.775 [2024-11-20 19:04:27.776120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.775 qpair failed and we were unable to recover it.
00:27:05.775 [2024-11-20 19:04:27.776313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.775 [2024-11-20 19:04:27.776347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.775 qpair failed and we were unable to recover it.
00:27:05.775 [2024-11-20 19:04:27.776587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.775 [2024-11-20 19:04:27.776620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.775 qpair failed and we were unable to recover it.
00:27:05.775 [2024-11-20 19:04:27.776901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.775 [2024-11-20 19:04:27.776933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.775 qpair failed and we were unable to recover it.
00:27:05.775 [2024-11-20 19:04:27.777118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.775 [2024-11-20 19:04:27.777151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.775 qpair failed and we were unable to recover it.
00:27:05.775 [2024-11-20 19:04:27.777368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.775 [2024-11-20 19:04:27.777403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.775 qpair failed and we were unable to recover it.
00:27:05.775 [2024-11-20 19:04:27.777582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.775 [2024-11-20 19:04:27.777615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.775 qpair failed and we were unable to recover it.
00:27:05.775 [2024-11-20 19:04:27.777786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.775 [2024-11-20 19:04:27.777820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.775 qpair failed and we were unable to recover it.
00:27:05.775 [2024-11-20 19:04:27.778012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.775 [2024-11-20 19:04:27.778045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.775 qpair failed and we were unable to recover it.
00:27:05.775 [2024-11-20 19:04:27.778306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.775 [2024-11-20 19:04:27.778342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.775 qpair failed and we were unable to recover it.
00:27:05.775 [2024-11-20 19:04:27.778593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.775 [2024-11-20 19:04:27.778626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.775 qpair failed and we were unable to recover it.
00:27:05.775 [2024-11-20 19:04:27.778888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.775 [2024-11-20 19:04:27.778921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.775 qpair failed and we were unable to recover it.
00:27:05.775 [2024-11-20 19:04:27.779160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.775 [2024-11-20 19:04:27.779193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.775 qpair failed and we were unable to recover it.
00:27:05.775 [2024-11-20 19:04:27.779395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.775 [2024-11-20 19:04:27.779429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.775 qpair failed and we were unable to recover it.
00:27:05.775 [2024-11-20 19:04:27.779617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.775 [2024-11-20 19:04:27.779652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.775 qpair failed and we were unable to recover it.
00:27:05.775 [2024-11-20 19:04:27.779869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.775 [2024-11-20 19:04:27.779903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.775 qpair failed and we were unable to recover it.
00:27:05.775 [2024-11-20 19:04:27.780154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.775 [2024-11-20 19:04:27.780197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.775 qpair failed and we were unable to recover it.
00:27:05.775 [2024-11-20 19:04:27.780455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.775 [2024-11-20 19:04:27.780490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.775 qpair failed and we were unable to recover it.
00:27:05.775 [2024-11-20 19:04:27.780772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.775 [2024-11-20 19:04:27.780805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.775 qpair failed and we were unable to recover it.
00:27:05.775 [2024-11-20 19:04:27.780990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.775 [2024-11-20 19:04:27.781025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.775 qpair failed and we were unable to recover it.
00:27:05.775 [2024-11-20 19:04:27.781223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.775 [2024-11-20 19:04:27.781259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.775 qpair failed and we were unable to recover it.
00:27:05.775 [2024-11-20 19:04:27.781531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.775 [2024-11-20 19:04:27.781566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.775 qpair failed and we were unable to recover it.
00:27:05.775 [2024-11-20 19:04:27.781838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.775 [2024-11-20 19:04:27.781878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.775 qpair failed and we were unable to recover it.
00:27:05.775 [2024-11-20 19:04:27.782021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.775 [2024-11-20 19:04:27.782053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.775 qpair failed and we were unable to recover it.
00:27:05.775 [2024-11-20 19:04:27.782249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.775 [2024-11-20 19:04:27.782284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.775 qpair failed and we were unable to recover it.
00:27:05.775 [2024-11-20 19:04:27.782549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.776 [2024-11-20 19:04:27.782583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.776 qpair failed and we were unable to recover it.
00:27:05.776 [2024-11-20 19:04:27.782849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.776 [2024-11-20 19:04:27.782882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.776 qpair failed and we were unable to recover it.
00:27:05.776 [2024-11-20 19:04:27.783067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.776 [2024-11-20 19:04:27.783100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.776 qpair failed and we were unable to recover it.
00:27:05.776 [2024-11-20 19:04:27.783399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.776 [2024-11-20 19:04:27.783434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.776 qpair failed and we were unable to recover it.
00:27:05.776 [2024-11-20 19:04:27.783614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.776 [2024-11-20 19:04:27.783653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.776 qpair failed and we were unable to recover it.
00:27:05.776 [2024-11-20 19:04:27.783841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.776 [2024-11-20 19:04:27.783874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.776 qpair failed and we were unable to recover it.
00:27:05.776 [2024-11-20 19:04:27.784007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.776 [2024-11-20 19:04:27.784040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.776 qpair failed and we were unable to recover it.
00:27:05.776 [2024-11-20 19:04:27.784232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.776 [2024-11-20 19:04:27.784266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.776 qpair failed and we were unable to recover it.
00:27:05.776 [2024-11-20 19:04:27.784530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.776 [2024-11-20 19:04:27.784563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.776 qpair failed and we were unable to recover it.
00:27:05.776 [2024-11-20 19:04:27.784701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.776 [2024-11-20 19:04:27.784735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.776 qpair failed and we were unable to recover it.
00:27:05.776 [2024-11-20 19:04:27.784977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.776 [2024-11-20 19:04:27.785010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.776 qpair failed and we were unable to recover it.
00:27:05.776 [2024-11-20 19:04:27.785145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.776 [2024-11-20 19:04:27.785178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.776 qpair failed and we were unable to recover it.
00:27:05.776 [2024-11-20 19:04:27.785337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.776 [2024-11-20 19:04:27.785370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.776 qpair failed and we were unable to recover it.
00:27:05.776 [2024-11-20 19:04:27.785553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.776 [2024-11-20 19:04:27.785587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.776 qpair failed and we were unable to recover it.
00:27:05.776 [2024-11-20 19:04:27.785776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.776 [2024-11-20 19:04:27.785810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.776 qpair failed and we were unable to recover it.
00:27:05.776 [2024-11-20 19:04:27.786061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.776 [2024-11-20 19:04:27.786094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.776 qpair failed and we were unable to recover it.
00:27:05.776 [2024-11-20 19:04:27.786271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.776 [2024-11-20 19:04:27.786305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.776 qpair failed and we were unable to recover it.
00:27:05.776 [2024-11-20 19:04:27.786521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.776 [2024-11-20 19:04:27.786555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.776 qpair failed and we were unable to recover it.
00:27:05.776 [2024-11-20 19:04:27.786742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.776 [2024-11-20 19:04:27.786775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.776 qpair failed and we were unable to recover it.
00:27:05.776 [2024-11-20 19:04:27.786951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.776 [2024-11-20 19:04:27.787003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.776 qpair failed and we were unable to recover it.
00:27:05.776 [2024-11-20 19:04:27.787128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.776 [2024-11-20 19:04:27.787161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.776 qpair failed and we were unable to recover it.
00:27:05.776 [2024-11-20 19:04:27.787346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.776 [2024-11-20 19:04:27.787380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.776 qpair failed and we were unable to recover it.
00:27:05.776 [2024-11-20 19:04:27.787579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.776 [2024-11-20 19:04:27.787613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.776 qpair failed and we were unable to recover it.
00:27:05.776 [2024-11-20 19:04:27.787750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.776 [2024-11-20 19:04:27.787782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.776 qpair failed and we were unable to recover it.
00:27:05.776 [2024-11-20 19:04:27.788036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.776 [2024-11-20 19:04:27.788074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.776 qpair failed and we were unable to recover it.
00:27:05.776 [2024-11-20 19:04:27.788271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.776 [2024-11-20 19:04:27.788306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.776 qpair failed and we were unable to recover it.
00:27:05.776 [2024-11-20 19:04:27.788492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.776 [2024-11-20 19:04:27.788524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.776 qpair failed and we were unable to recover it.
00:27:05.776 [2024-11-20 19:04:27.788643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.776 [2024-11-20 19:04:27.788676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.776 qpair failed and we were unable to recover it.
00:27:05.776 [2024-11-20 19:04:27.788856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.776 [2024-11-20 19:04:27.788889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.776 qpair failed and we were unable to recover it.
00:27:05.776 [2024-11-20 19:04:27.789074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.776 [2024-11-20 19:04:27.789107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.776 qpair failed and we were unable to recover it.
00:27:05.776 [2024-11-20 19:04:27.789383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.776 [2024-11-20 19:04:27.789417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.776 qpair failed and we were unable to recover it.
00:27:05.776 [2024-11-20 19:04:27.789546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.776 [2024-11-20 19:04:27.789578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.776 qpair failed and we were unable to recover it.
00:27:05.776 [2024-11-20 19:04:27.789689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.776 [2024-11-20 19:04:27.789722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.776 qpair failed and we were unable to recover it.
00:27:05.776 [2024-11-20 19:04:27.789943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.776 [2024-11-20 19:04:27.789976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.776 qpair failed and we were unable to recover it.
00:27:05.776 [2024-11-20 19:04:27.790086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.776 [2024-11-20 19:04:27.790119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.776 qpair failed and we were unable to recover it.
00:27:05.776 [2024-11-20 19:04:27.790306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.776 [2024-11-20 19:04:27.790341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.777 qpair failed and we were unable to recover it.
00:27:05.777 [2024-11-20 19:04:27.790534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.777 [2024-11-20 19:04:27.790567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.777 qpair failed and we were unable to recover it.
00:27:05.777 [2024-11-20 19:04:27.790809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.777 [2024-11-20 19:04:27.790842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.777 qpair failed and we were unable to recover it.
00:27:05.777 [2024-11-20 19:04:27.791088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.777 [2024-11-20 19:04:27.791120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.777 qpair failed and we were unable to recover it.
00:27:05.777 [2024-11-20 19:04:27.791301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.777 [2024-11-20 19:04:27.791335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.777 qpair failed and we were unable to recover it.
00:27:05.777 [2024-11-20 19:04:27.791518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.777 [2024-11-20 19:04:27.791552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.777 qpair failed and we were unable to recover it.
00:27:05.777 [2024-11-20 19:04:27.791819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.777 [2024-11-20 19:04:27.791852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.777 qpair failed and we were unable to recover it.
00:27:05.777 [2024-11-20 19:04:27.792050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.777 [2024-11-20 19:04:27.792084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.777 qpair failed and we were unable to recover it.
00:27:05.777 [2024-11-20 19:04:27.792293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.777 [2024-11-20 19:04:27.792328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.777 qpair failed and we were unable to recover it. 00:27:05.777 [2024-11-20 19:04:27.792508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.777 [2024-11-20 19:04:27.792540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.777 qpair failed and we were unable to recover it. 00:27:05.777 [2024-11-20 19:04:27.792732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.777 [2024-11-20 19:04:27.792765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.777 qpair failed and we were unable to recover it. 00:27:05.777 [2024-11-20 19:04:27.792899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.777 [2024-11-20 19:04:27.792932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.777 qpair failed and we were unable to recover it. 00:27:05.777 [2024-11-20 19:04:27.793069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.777 [2024-11-20 19:04:27.793109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.777 qpair failed and we were unable to recover it. 
00:27:05.777 [2024-11-20 19:04:27.793245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.777 [2024-11-20 19:04:27.793279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.777 qpair failed and we were unable to recover it. 00:27:05.777 [2024-11-20 19:04:27.793529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.777 [2024-11-20 19:04:27.793562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.777 qpair failed and we were unable to recover it. 00:27:05.777 [2024-11-20 19:04:27.793686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.777 [2024-11-20 19:04:27.793719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.777 qpair failed and we were unable to recover it. 00:27:05.777 [2024-11-20 19:04:27.793935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.777 [2024-11-20 19:04:27.793979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.777 qpair failed and we were unable to recover it. 00:27:05.777 [2024-11-20 19:04:27.794159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.777 [2024-11-20 19:04:27.794191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.777 qpair failed and we were unable to recover it. 
00:27:05.777 [2024-11-20 19:04:27.794378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.777 [2024-11-20 19:04:27.794410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.777 qpair failed and we were unable to recover it. 00:27:05.777 [2024-11-20 19:04:27.794593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.777 [2024-11-20 19:04:27.794635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.777 qpair failed and we were unable to recover it. 00:27:05.777 [2024-11-20 19:04:27.794914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.777 [2024-11-20 19:04:27.794947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.777 qpair failed and we were unable to recover it. 00:27:05.777 [2024-11-20 19:04:27.795128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.777 [2024-11-20 19:04:27.795166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.777 qpair failed and we were unable to recover it. 00:27:05.777 [2024-11-20 19:04:27.795316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.777 [2024-11-20 19:04:27.795351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.777 qpair failed and we were unable to recover it. 
00:27:05.777 [2024-11-20 19:04:27.795480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.777 [2024-11-20 19:04:27.795512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.777 qpair failed and we were unable to recover it. 00:27:05.777 [2024-11-20 19:04:27.795755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.777 [2024-11-20 19:04:27.795788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.777 qpair failed and we were unable to recover it. 00:27:05.777 [2024-11-20 19:04:27.796056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.777 [2024-11-20 19:04:27.796089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.777 qpair failed and we were unable to recover it. 00:27:05.777 [2024-11-20 19:04:27.796355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.777 [2024-11-20 19:04:27.796390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.777 qpair failed and we were unable to recover it. 00:27:05.777 [2024-11-20 19:04:27.796578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.777 [2024-11-20 19:04:27.796612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.777 qpair failed and we were unable to recover it. 
00:27:05.777 [2024-11-20 19:04:27.796753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.777 [2024-11-20 19:04:27.796786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.777 qpair failed and we were unable to recover it. 00:27:05.777 [2024-11-20 19:04:27.796981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.777 [2024-11-20 19:04:27.797013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.777 qpair failed and we were unable to recover it. 00:27:05.777 [2024-11-20 19:04:27.797288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.777 [2024-11-20 19:04:27.797322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.777 qpair failed and we were unable to recover it. 00:27:05.777 [2024-11-20 19:04:27.797552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.777 [2024-11-20 19:04:27.797586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.777 qpair failed and we were unable to recover it. 00:27:05.777 [2024-11-20 19:04:27.797830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.777 [2024-11-20 19:04:27.797863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.777 qpair failed and we were unable to recover it. 
00:27:05.777 [2024-11-20 19:04:27.798047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.777 [2024-11-20 19:04:27.798080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.777 qpair failed and we were unable to recover it. 00:27:05.777 [2024-11-20 19:04:27.798321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.777 [2024-11-20 19:04:27.798355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.777 qpair failed and we were unable to recover it. 00:27:05.777 [2024-11-20 19:04:27.798547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.777 [2024-11-20 19:04:27.798580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.777 qpair failed and we were unable to recover it. 00:27:05.777 [2024-11-20 19:04:27.798702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.778 [2024-11-20 19:04:27.798735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.778 qpair failed and we were unable to recover it. 00:27:05.778 [2024-11-20 19:04:27.798988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.778 [2024-11-20 19:04:27.799020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.778 qpair failed and we were unable to recover it. 
00:27:05.778 [2024-11-20 19:04:27.799152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.778 [2024-11-20 19:04:27.799185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.778 qpair failed and we were unable to recover it. 00:27:05.778 [2024-11-20 19:04:27.799442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.778 [2024-11-20 19:04:27.799475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.778 qpair failed and we were unable to recover it. 00:27:05.778 [2024-11-20 19:04:27.799652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.778 [2024-11-20 19:04:27.799685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.778 qpair failed and we were unable to recover it. 00:27:05.778 [2024-11-20 19:04:27.799815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.778 [2024-11-20 19:04:27.799847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.778 qpair failed and we were unable to recover it. 00:27:05.778 [2024-11-20 19:04:27.800031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.778 [2024-11-20 19:04:27.800064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.778 qpair failed and we were unable to recover it. 
00:27:05.778 [2024-11-20 19:04:27.800255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.778 [2024-11-20 19:04:27.800297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.778 qpair failed and we were unable to recover it. 00:27:05.778 [2024-11-20 19:04:27.800412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.778 [2024-11-20 19:04:27.800445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.778 qpair failed and we were unable to recover it. 00:27:05.778 [2024-11-20 19:04:27.800641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.778 [2024-11-20 19:04:27.800674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.778 qpair failed and we were unable to recover it. 00:27:05.778 [2024-11-20 19:04:27.800829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.778 [2024-11-20 19:04:27.800862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.778 qpair failed and we were unable to recover it. 00:27:05.778 [2024-11-20 19:04:27.801106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.778 [2024-11-20 19:04:27.801139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.778 qpair failed and we were unable to recover it. 
00:27:05.778 [2024-11-20 19:04:27.801442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.778 [2024-11-20 19:04:27.801477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.778 qpair failed and we were unable to recover it. 00:27:05.778 [2024-11-20 19:04:27.801681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.778 [2024-11-20 19:04:27.801724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.778 qpair failed and we were unable to recover it. 00:27:05.778 [2024-11-20 19:04:27.801916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.778 [2024-11-20 19:04:27.801949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.778 qpair failed and we were unable to recover it. 00:27:05.778 [2024-11-20 19:04:27.802139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.778 [2024-11-20 19:04:27.802171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.778 qpair failed and we were unable to recover it. 00:27:05.778 [2024-11-20 19:04:27.802359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.778 [2024-11-20 19:04:27.802393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.778 qpair failed and we were unable to recover it. 
00:27:05.778 [2024-11-20 19:04:27.802667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.778 [2024-11-20 19:04:27.802699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.778 qpair failed and we were unable to recover it. 00:27:05.778 [2024-11-20 19:04:27.802887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.778 [2024-11-20 19:04:27.802921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.778 qpair failed and we were unable to recover it. 00:27:05.778 [2024-11-20 19:04:27.803099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.778 [2024-11-20 19:04:27.803132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.778 qpair failed and we were unable to recover it. 00:27:05.778 [2024-11-20 19:04:27.803323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.778 [2024-11-20 19:04:27.803357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.778 qpair failed and we were unable to recover it. 00:27:05.778 [2024-11-20 19:04:27.803614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.778 [2024-11-20 19:04:27.803684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:05.778 qpair failed and we were unable to recover it. 
00:27:05.778 [2024-11-20 19:04:27.803964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.778 [2024-11-20 19:04:27.804002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.778 qpair failed and we were unable to recover it. 00:27:05.778 [2024-11-20 19:04:27.804191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.778 [2024-11-20 19:04:27.804237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.778 qpair failed and we were unable to recover it. 00:27:05.778 [2024-11-20 19:04:27.804428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.778 [2024-11-20 19:04:27.804461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.778 qpair failed and we were unable to recover it. 00:27:05.778 [2024-11-20 19:04:27.804623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.778 [2024-11-20 19:04:27.804655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.778 qpair failed and we were unable to recover it. 00:27:05.778 [2024-11-20 19:04:27.804845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.778 [2024-11-20 19:04:27.804887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.778 qpair failed and we were unable to recover it. 
00:27:05.778 [2024-11-20 19:04:27.805072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.779 [2024-11-20 19:04:27.805105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.779 qpair failed and we were unable to recover it. 00:27:05.779 [2024-11-20 19:04:27.805377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.779 [2024-11-20 19:04:27.805411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.779 qpair failed and we were unable to recover it. 00:27:05.779 [2024-11-20 19:04:27.805601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.779 [2024-11-20 19:04:27.805634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.779 qpair failed and we were unable to recover it. 00:27:05.779 [2024-11-20 19:04:27.805747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.779 [2024-11-20 19:04:27.805781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.779 qpair failed and we were unable to recover it. 00:27:05.779 [2024-11-20 19:04:27.805910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.779 [2024-11-20 19:04:27.805943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.779 qpair failed and we were unable to recover it. 
00:27:05.779 [2024-11-20 19:04:27.806129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.779 [2024-11-20 19:04:27.806162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.779 qpair failed and we were unable to recover it. 00:27:05.779 [2024-11-20 19:04:27.806298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.779 [2024-11-20 19:04:27.806331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.779 qpair failed and we were unable to recover it. 00:27:05.779 [2024-11-20 19:04:27.806562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.779 [2024-11-20 19:04:27.806601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.779 qpair failed and we were unable to recover it. 00:27:05.779 [2024-11-20 19:04:27.806786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.779 [2024-11-20 19:04:27.806817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.779 qpair failed and we were unable to recover it. 00:27:05.779 [2024-11-20 19:04:27.807001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.779 [2024-11-20 19:04:27.807033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.779 qpair failed and we were unable to recover it. 
00:27:05.779 [2024-11-20 19:04:27.807276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.779 [2024-11-20 19:04:27.807310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.779 qpair failed and we were unable to recover it. 00:27:05.779 [2024-11-20 19:04:27.807506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.779 [2024-11-20 19:04:27.807539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.779 qpair failed and we were unable to recover it. 00:27:05.779 [2024-11-20 19:04:27.807751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.779 [2024-11-20 19:04:27.807784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.779 qpair failed and we were unable to recover it. 00:27:05.779 [2024-11-20 19:04:27.807975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.779 [2024-11-20 19:04:27.808009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.779 qpair failed and we were unable to recover it. 00:27:05.779 [2024-11-20 19:04:27.808279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.779 [2024-11-20 19:04:27.808313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.779 qpair failed and we were unable to recover it. 
00:27:05.779 [2024-11-20 19:04:27.808487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.779 [2024-11-20 19:04:27.808520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.779 qpair failed and we were unable to recover it. 00:27:05.779 [2024-11-20 19:04:27.808758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.779 [2024-11-20 19:04:27.808790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.779 qpair failed and we were unable to recover it. 00:27:05.779 [2024-11-20 19:04:27.808944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.779 [2024-11-20 19:04:27.808978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.779 qpair failed and we were unable to recover it. 00:27:05.779 [2024-11-20 19:04:27.809241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.779 [2024-11-20 19:04:27.809275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.779 qpair failed and we were unable to recover it. 00:27:05.779 [2024-11-20 19:04:27.809468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.779 [2024-11-20 19:04:27.809500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:05.779 qpair failed and we were unable to recover it. 
00:27:05.779 [2024-11-20 19:04:27.809684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.779 [2024-11-20 19:04:27.809718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.779 qpair failed and we were unable to recover it.
00:27:05.779 [2024-11-20 19:04:27.809918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.779 [2024-11-20 19:04:27.809951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.779 qpair failed and we were unable to recover it.
00:27:05.779 [2024-11-20 19:04:27.810090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.779 [2024-11-20 19:04:27.810123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.779 qpair failed and we were unable to recover it.
00:27:05.779 [2024-11-20 19:04:27.810386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.779 [2024-11-20 19:04:27.810421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.779 qpair failed and we were unable to recover it.
00:27:05.779 [2024-11-20 19:04:27.810607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.779 [2024-11-20 19:04:27.810639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.779 qpair failed and we were unable to recover it.
00:27:05.779 [2024-11-20 19:04:27.810921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.779 [2024-11-20 19:04:27.810954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.779 qpair failed and we were unable to recover it.
00:27:05.779 [2024-11-20 19:04:27.811196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.779 [2024-11-20 19:04:27.811242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.779 qpair failed and we were unable to recover it.
00:27:05.779 [2024-11-20 19:04:27.811431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.779 [2024-11-20 19:04:27.811464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.779 qpair failed and we were unable to recover it.
00:27:05.779 [2024-11-20 19:04:27.811580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.779 [2024-11-20 19:04:27.811613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.779 qpair failed and we were unable to recover it.
00:27:05.779 [2024-11-20 19:04:27.811815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.779 [2024-11-20 19:04:27.811848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.779 qpair failed and we were unable to recover it.
00:27:05.779 [2024-11-20 19:04:27.812067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.779 [2024-11-20 19:04:27.812099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.779 qpair failed and we were unable to recover it.
00:27:05.779 [2024-11-20 19:04:27.812284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.779 [2024-11-20 19:04:27.812320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.779 qpair failed and we were unable to recover it.
00:27:05.779 [2024-11-20 19:04:27.812446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.779 [2024-11-20 19:04:27.812479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.779 qpair failed and we were unable to recover it.
00:27:05.779 [2024-11-20 19:04:27.812608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.779 [2024-11-20 19:04:27.812642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:05.779 qpair failed and we were unable to recover it.
00:27:05.779 [2024-11-20 19:04:27.812833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.779 [2024-11-20 19:04:27.812874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.779 qpair failed and we were unable to recover it.
00:27:05.779 [2024-11-20 19:04:27.812996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.779 [2024-11-20 19:04:27.813029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.779 qpair failed and we were unable to recover it.
00:27:05.780 [2024-11-20 19:04:27.813151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.780 [2024-11-20 19:04:27.813187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.780 qpair failed and we were unable to recover it.
00:27:05.780 [2024-11-20 19:04:27.813396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.780 [2024-11-20 19:04:27.813430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.780 qpair failed and we were unable to recover it.
00:27:05.780 [2024-11-20 19:04:27.813651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.780 [2024-11-20 19:04:27.813685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.780 qpair failed and we were unable to recover it.
00:27:05.780 [2024-11-20 19:04:27.813814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.780 [2024-11-20 19:04:27.813847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.780 qpair failed and we were unable to recover it.
00:27:05.780 [2024-11-20 19:04:27.814027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.780 [2024-11-20 19:04:27.814059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.780 qpair failed and we were unable to recover it.
00:27:05.780 [2024-11-20 19:04:27.814174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.780 [2024-11-20 19:04:27.814221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.780 qpair failed and we were unable to recover it.
00:27:05.780 [2024-11-20 19:04:27.814378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.780 [2024-11-20 19:04:27.814411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.780 qpair failed and we were unable to recover it.
00:27:05.780 [2024-11-20 19:04:27.814537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.780 [2024-11-20 19:04:27.814570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.780 qpair failed and we were unable to recover it.
00:27:05.780 [2024-11-20 19:04:27.814782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.780 [2024-11-20 19:04:27.814814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.780 qpair failed and we were unable to recover it.
00:27:05.780 [2024-11-20 19:04:27.815059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.780 [2024-11-20 19:04:27.815091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.780 qpair failed and we were unable to recover it.
00:27:05.780 [2024-11-20 19:04:27.815262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.780 [2024-11-20 19:04:27.815295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.780 qpair failed and we were unable to recover it.
00:27:05.780 [2024-11-20 19:04:27.815477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.780 [2024-11-20 19:04:27.815509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.780 qpair failed and we were unable to recover it.
00:27:05.780 [2024-11-20 19:04:27.815714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.780 [2024-11-20 19:04:27.815747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.780 qpair failed and we were unable to recover it.
00:27:05.780 [2024-11-20 19:04:27.816021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.780 [2024-11-20 19:04:27.816053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.780 qpair failed and we were unable to recover it.
00:27:05.780 [2024-11-20 19:04:27.816236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.780 [2024-11-20 19:04:27.816271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.780 qpair failed and we were unable to recover it.
00:27:05.780 [2024-11-20 19:04:27.816513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.780 [2024-11-20 19:04:27.816545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.780 qpair failed and we were unable to recover it.
00:27:05.780 [2024-11-20 19:04:27.816794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.780 [2024-11-20 19:04:27.816826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.780 qpair failed and we were unable to recover it.
00:27:05.780 [2024-11-20 19:04:27.817002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.780 [2024-11-20 19:04:27.817035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.780 qpair failed and we were unable to recover it.
00:27:05.780 [2024-11-20 19:04:27.817288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.780 [2024-11-20 19:04:27.817322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.780 qpair failed and we were unable to recover it.
00:27:05.780 [2024-11-20 19:04:27.817521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.780 [2024-11-20 19:04:27.817553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.780 qpair failed and we were unable to recover it.
00:27:05.780 [2024-11-20 19:04:27.817792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.780 [2024-11-20 19:04:27.817826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.780 qpair failed and we were unable to recover it.
00:27:05.780 [2024-11-20 19:04:27.818058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.780 [2024-11-20 19:04:27.818091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.780 qpair failed and we were unable to recover it.
00:27:05.780 [2024-11-20 19:04:27.818302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.780 [2024-11-20 19:04:27.818336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.780 qpair failed and we were unable to recover it.
00:27:05.780 [2024-11-20 19:04:27.818521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.780 [2024-11-20 19:04:27.818554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.780 qpair failed and we were unable to recover it.
00:27:05.780 [2024-11-20 19:04:27.818682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.780 [2024-11-20 19:04:27.818715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.780 qpair failed and we were unable to recover it.
00:27:05.780 [2024-11-20 19:04:27.818841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.780 [2024-11-20 19:04:27.818880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.780 qpair failed and we were unable to recover it.
00:27:05.780 [2024-11-20 19:04:27.819146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.780 [2024-11-20 19:04:27.819178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.780 qpair failed and we were unable to recover it.
00:27:05.780 [2024-11-20 19:04:27.819375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.780 [2024-11-20 19:04:27.819409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.780 qpair failed and we were unable to recover it.
00:27:05.780 [2024-11-20 19:04:27.819655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.780 [2024-11-20 19:04:27.819687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.780 qpair failed and we were unable to recover it.
00:27:05.780 [2024-11-20 19:04:27.819925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.780 [2024-11-20 19:04:27.819958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.780 qpair failed and we were unable to recover it.
00:27:05.780 [2024-11-20 19:04:27.820217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.780 [2024-11-20 19:04:27.820251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.780 qpair failed and we were unable to recover it.
00:27:05.780 [2024-11-20 19:04:27.820518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.780 [2024-11-20 19:04:27.820550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.780 qpair failed and we were unable to recover it.
00:27:05.780 [2024-11-20 19:04:27.820739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.780 [2024-11-20 19:04:27.820772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.780 qpair failed and we were unable to recover it.
00:27:05.780 [2024-11-20 19:04:27.820955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.780 [2024-11-20 19:04:27.820987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.780 qpair failed and we were unable to recover it.
00:27:05.780 [2024-11-20 19:04:27.821222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.780 [2024-11-20 19:04:27.821258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.780 qpair failed and we were unable to recover it.
00:27:05.780 [2024-11-20 19:04:27.821457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.780 [2024-11-20 19:04:27.821489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.780 qpair failed and we were unable to recover it.
00:27:05.781 [2024-11-20 19:04:27.821663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.781 [2024-11-20 19:04:27.821703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.781 qpair failed and we were unable to recover it.
00:27:05.781 [2024-11-20 19:04:27.821912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.781 [2024-11-20 19:04:27.821945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.781 qpair failed and we were unable to recover it.
00:27:05.781 [2024-11-20 19:04:27.822225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.781 [2024-11-20 19:04:27.822257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.781 qpair failed and we were unable to recover it.
00:27:05.781 [2024-11-20 19:04:27.822466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.781 [2024-11-20 19:04:27.822499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.781 qpair failed and we were unable to recover it.
00:27:05.781 [2024-11-20 19:04:27.822675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.781 [2024-11-20 19:04:27.822707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.781 qpair failed and we were unable to recover it.
00:27:05.781 [2024-11-20 19:04:27.822971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.781 [2024-11-20 19:04:27.823003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.781 qpair failed and we were unable to recover it.
00:27:05.781 [2024-11-20 19:04:27.823183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.781 [2024-11-20 19:04:27.823223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.781 qpair failed and we were unable to recover it.
00:27:05.781 [2024-11-20 19:04:27.823353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.781 [2024-11-20 19:04:27.823386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.781 qpair failed and we were unable to recover it.
00:27:05.781 [2024-11-20 19:04:27.823572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.781 [2024-11-20 19:04:27.823604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.781 qpair failed and we were unable to recover it.
00:27:05.781 [2024-11-20 19:04:27.823805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.781 [2024-11-20 19:04:27.823838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.781 qpair failed and we were unable to recover it.
00:27:05.781 [2024-11-20 19:04:27.824009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.781 [2024-11-20 19:04:27.824042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.781 qpair failed and we were unable to recover it.
00:27:05.781 [2024-11-20 19:04:27.824144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.781 [2024-11-20 19:04:27.824177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.781 qpair failed and we were unable to recover it.
00:27:05.781 [2024-11-20 19:04:27.824444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.781 [2024-11-20 19:04:27.824476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.781 qpair failed and we were unable to recover it.
00:27:05.781 [2024-11-20 19:04:27.824595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.781 [2024-11-20 19:04:27.824628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.781 qpair failed and we were unable to recover it.
00:27:05.781 [2024-11-20 19:04:27.824774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.781 [2024-11-20 19:04:27.824815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.781 qpair failed and we were unable to recover it.
00:27:05.781 [2024-11-20 19:04:27.824986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.781 [2024-11-20 19:04:27.825018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.781 qpair failed and we were unable to recover it.
00:27:05.781 [2024-11-20 19:04:27.825199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.781 [2024-11-20 19:04:27.825245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.781 qpair failed and we were unable to recover it.
00:27:05.781 [2024-11-20 19:04:27.825368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.781 [2024-11-20 19:04:27.825400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.781 qpair failed and we were unable to recover it.
00:27:05.781 [2024-11-20 19:04:27.825598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.781 [2024-11-20 19:04:27.825631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.781 qpair failed and we were unable to recover it.
00:27:05.781 [2024-11-20 19:04:27.825858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.781 [2024-11-20 19:04:27.825890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.781 qpair failed and we were unable to recover it.
00:27:05.781 [2024-11-20 19:04:27.826033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.781 [2024-11-20 19:04:27.826066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.781 qpair failed and we were unable to recover it.
00:27:05.781 [2024-11-20 19:04:27.826252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.781 [2024-11-20 19:04:27.826286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.781 qpair failed and we were unable to recover it.
00:27:05.781 [2024-11-20 19:04:27.826485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.781 [2024-11-20 19:04:27.826516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.781 qpair failed and we were unable to recover it.
00:27:05.781 [2024-11-20 19:04:27.826701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.781 [2024-11-20 19:04:27.826734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.781 qpair failed and we were unable to recover it.
00:27:05.781 [2024-11-20 19:04:27.826986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.781 [2024-11-20 19:04:27.827018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.781 qpair failed and we were unable to recover it.
00:27:05.781 [2024-11-20 19:04:27.827190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.781 [2024-11-20 19:04:27.827233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.781 qpair failed and we were unable to recover it.
00:27:05.781 [2024-11-20 19:04:27.827352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.781 [2024-11-20 19:04:27.827396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.781 qpair failed and we were unable to recover it.
00:27:05.781 [2024-11-20 19:04:27.827622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.781 [2024-11-20 19:04:27.827654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.781 qpair failed and we were unable to recover it.
00:27:05.781 [2024-11-20 19:04:27.827829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.781 [2024-11-20 19:04:27.827864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.781 qpair failed and we were unable to recover it.
00:27:05.781 [2024-11-20 19:04:27.828047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.781 [2024-11-20 19:04:27.828080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.781 qpair failed and we were unable to recover it.
00:27:05.781 [2024-11-20 19:04:27.828294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.781 [2024-11-20 19:04:27.828328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.781 qpair failed and we were unable to recover it.
00:27:05.781 [2024-11-20 19:04:27.828468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.781 [2024-11-20 19:04:27.828501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.781 qpair failed and we were unable to recover it.
00:27:05.781 [2024-11-20 19:04:27.828741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.781 [2024-11-20 19:04:27.828774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.781 qpair failed and we were unable to recover it.
00:27:05.781 [2024-11-20 19:04:27.828949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.782 [2024-11-20 19:04:27.828982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.782 qpair failed and we were unable to recover it.
00:27:05.782 [2024-11-20 19:04:27.829166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.782 [2024-11-20 19:04:27.829198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.782 qpair failed and we were unable to recover it.
00:27:05.782 [2024-11-20 19:04:27.829417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.782 [2024-11-20 19:04:27.829451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.782 qpair failed and we were unable to recover it.
00:27:05.782 [2024-11-20 19:04:27.829578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.782 [2024-11-20 19:04:27.829610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.782 qpair failed and we were unable to recover it.
00:27:05.782 [2024-11-20 19:04:27.829795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.782 [2024-11-20 19:04:27.829828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.782 qpair failed and we were unable to recover it.
00:27:05.782 [2024-11-20 19:04:27.830010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.782 [2024-11-20 19:04:27.830043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.782 qpair failed and we were unable to recover it.
00:27:05.782 [2024-11-20 19:04:27.830244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.782 [2024-11-20 19:04:27.830281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.782 qpair failed and we were unable to recover it.
00:27:05.782 [2024-11-20 19:04:27.830549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.782 [2024-11-20 19:04:27.830582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.782 qpair failed and we were unable to recover it.
00:27:05.782 [2024-11-20 19:04:27.830693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.782 [2024-11-20 19:04:27.830726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.782 qpair failed and we were unable to recover it.
00:27:05.782 [2024-11-20 19:04:27.830917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.782 [2024-11-20 19:04:27.830950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.782 qpair failed and we were unable to recover it.
00:27:05.782 [2024-11-20 19:04:27.831128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.782 [2024-11-20 19:04:27.831174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.782 qpair failed and we were unable to recover it.
00:27:05.782 [2024-11-20 19:04:27.831478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.782 [2024-11-20 19:04:27.831511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.782 qpair failed and we were unable to recover it.
00:27:05.782 [2024-11-20 19:04:27.831689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.782 [2024-11-20 19:04:27.831722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.782 qpair failed and we were unable to recover it.
00:27:05.782 [2024-11-20 19:04:27.831932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.782 [2024-11-20 19:04:27.831964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.782 qpair failed and we were unable to recover it.
00:27:05.782 [2024-11-20 19:04:27.832151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.782 [2024-11-20 19:04:27.832183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.782 qpair failed and we were unable to recover it.
00:27:05.782 [2024-11-20 19:04:27.832386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.782 [2024-11-20 19:04:27.832420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.782 qpair failed and we were unable to recover it.
00:27:05.782 [2024-11-20 19:04:27.832603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.782 [2024-11-20 19:04:27.832635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.782 qpair failed and we were unable to recover it.
00:27:05.782 [2024-11-20 19:04:27.832754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.782 [2024-11-20 19:04:27.832794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.782 qpair failed and we were unable to recover it.
00:27:05.782 [2024-11-20 19:04:27.832917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.782 [2024-11-20 19:04:27.832953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.782 qpair failed and we were unable to recover it.
00:27:05.782 [2024-11-20 19:04:27.833079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.782 [2024-11-20 19:04:27.833112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.782 qpair failed and we were unable to recover it.
00:27:05.782 [2024-11-20 19:04:27.833303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.782 [2024-11-20 19:04:27.833336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.782 qpair failed and we were unable to recover it.
00:27:05.782 [2024-11-20 19:04:27.833461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.782 [2024-11-20 19:04:27.833494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.782 qpair failed and we were unable to recover it.
00:27:05.782 [2024-11-20 19:04:27.833750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.782 [2024-11-20 19:04:27.833784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.782 qpair failed and we were unable to recover it.
00:27:05.782 [2024-11-20 19:04:27.833979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.782 [2024-11-20 19:04:27.834011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.782 qpair failed and we were unable to recover it.
00:27:05.782 [2024-11-20 19:04:27.834415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.782 [2024-11-20 19:04:27.834449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.782 qpair failed and we were unable to recover it.
00:27:05.782 [2024-11-20 19:04:27.834703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.782 [2024-11-20 19:04:27.834736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.782 qpair failed and we were unable to recover it.
00:27:05.782 [2024-11-20 19:04:27.835024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.782 [2024-11-20 19:04:27.835057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.782 qpair failed and we were unable to recover it.
00:27:05.782 [2024-11-20 19:04:27.835305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.782 [2024-11-20 19:04:27.835338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.782 qpair failed and we were unable to recover it.
00:27:05.782 [2024-11-20 19:04:27.835538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.782 [2024-11-20 19:04:27.835571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.782 qpair failed and we were unable to recover it.
00:27:05.782 [2024-11-20 19:04:27.835838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.782 [2024-11-20 19:04:27.835871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.782 qpair failed and we were unable to recover it.
00:27:05.782 [2024-11-20 19:04:27.835993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.782 [2024-11-20 19:04:27.836025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.782 qpair failed and we were unable to recover it. 00:27:05.782 [2024-11-20 19:04:27.836242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.782 [2024-11-20 19:04:27.836277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.782 qpair failed and we were unable to recover it. 00:27:05.782 [2024-11-20 19:04:27.836469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.782 [2024-11-20 19:04:27.836502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.782 qpair failed and we were unable to recover it. 00:27:05.782 [2024-11-20 19:04:27.836714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.782 [2024-11-20 19:04:27.836746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.782 qpair failed and we were unable to recover it. 00:27:05.782 [2024-11-20 19:04:27.836936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.782 [2024-11-20 19:04:27.836969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.782 qpair failed and we were unable to recover it. 
00:27:05.782 [2024-11-20 19:04:27.837182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.782 [2024-11-20 19:04:27.837224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.782 qpair failed and we were unable to recover it. 00:27:05.782 [2024-11-20 19:04:27.837426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.782 [2024-11-20 19:04:27.837460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.782 qpair failed and we were unable to recover it. 00:27:05.782 [2024-11-20 19:04:27.837635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.783 [2024-11-20 19:04:27.837677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.783 qpair failed and we were unable to recover it. 00:27:05.783 [2024-11-20 19:04:27.837954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.783 [2024-11-20 19:04:27.837987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.783 qpair failed and we were unable to recover it. 00:27:05.783 [2024-11-20 19:04:27.838143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.783 [2024-11-20 19:04:27.838176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.783 qpair failed and we were unable to recover it. 
00:27:05.783 [2024-11-20 19:04:27.838478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.783 [2024-11-20 19:04:27.838512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.783 qpair failed and we were unable to recover it. 00:27:05.783 [2024-11-20 19:04:27.838728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.783 [2024-11-20 19:04:27.838761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.783 qpair failed and we were unable to recover it. 00:27:05.783 [2024-11-20 19:04:27.838933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.783 [2024-11-20 19:04:27.838966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.783 qpair failed and we were unable to recover it. 00:27:05.783 [2024-11-20 19:04:27.839158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.783 [2024-11-20 19:04:27.839190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.783 qpair failed and we were unable to recover it. 00:27:05.783 [2024-11-20 19:04:27.839387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.783 [2024-11-20 19:04:27.839421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.783 qpair failed and we were unable to recover it. 
00:27:05.783 [2024-11-20 19:04:27.839556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.783 [2024-11-20 19:04:27.839588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.783 qpair failed and we were unable to recover it. 00:27:05.783 [2024-11-20 19:04:27.839694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.783 [2024-11-20 19:04:27.839735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.783 qpair failed and we were unable to recover it. 00:27:05.783 [2024-11-20 19:04:27.839913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.783 [2024-11-20 19:04:27.839946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.783 qpair failed and we were unable to recover it. 00:27:05.783 [2024-11-20 19:04:27.840134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.783 [2024-11-20 19:04:27.840167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.783 qpair failed and we were unable to recover it. 00:27:05.783 [2024-11-20 19:04:27.840359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.783 [2024-11-20 19:04:27.840393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.783 qpair failed and we were unable to recover it. 
00:27:05.783 [2024-11-20 19:04:27.840592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.783 [2024-11-20 19:04:27.840625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.783 qpair failed and we were unable to recover it. 00:27:05.783 [2024-11-20 19:04:27.840832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.783 [2024-11-20 19:04:27.840864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.783 qpair failed and we were unable to recover it. 00:27:05.783 [2024-11-20 19:04:27.840993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.783 [2024-11-20 19:04:27.841026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.783 qpair failed and we were unable to recover it. 00:27:05.783 [2024-11-20 19:04:27.841214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.783 [2024-11-20 19:04:27.841247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.783 qpair failed and we were unable to recover it. 00:27:05.783 [2024-11-20 19:04:27.841369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.783 [2024-11-20 19:04:27.841404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.783 qpair failed and we were unable to recover it. 
00:27:05.783 [2024-11-20 19:04:27.841592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.783 [2024-11-20 19:04:27.841625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.783 qpair failed and we were unable to recover it. 00:27:05.783 [2024-11-20 19:04:27.841818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.783 [2024-11-20 19:04:27.841851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.783 qpair failed and we were unable to recover it. 00:27:05.783 [2024-11-20 19:04:27.842066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.783 [2024-11-20 19:04:27.842099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.783 qpair failed and we were unable to recover it. 00:27:05.783 [2024-11-20 19:04:27.842226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.783 [2024-11-20 19:04:27.842259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.783 qpair failed and we were unable to recover it. 00:27:05.783 [2024-11-20 19:04:27.842454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.783 [2024-11-20 19:04:27.842487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.783 qpair failed and we were unable to recover it. 
00:27:05.783 [2024-11-20 19:04:27.842702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.783 [2024-11-20 19:04:27.842735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.783 qpair failed and we were unable to recover it. 00:27:05.783 [2024-11-20 19:04:27.842996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.783 [2024-11-20 19:04:27.843033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.783 qpair failed and we were unable to recover it. 00:27:05.783 [2024-11-20 19:04:27.843229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.783 [2024-11-20 19:04:27.843261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.783 qpair failed and we were unable to recover it. 00:27:05.783 [2024-11-20 19:04:27.843461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.783 [2024-11-20 19:04:27.843494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.783 qpair failed and we were unable to recover it. 00:27:05.783 [2024-11-20 19:04:27.843738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.783 [2024-11-20 19:04:27.843771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.783 qpair failed and we were unable to recover it. 
00:27:05.783 [2024-11-20 19:04:27.844019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.783 [2024-11-20 19:04:27.844052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.783 qpair failed and we were unable to recover it. 00:27:05.783 [2024-11-20 19:04:27.844238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.783 [2024-11-20 19:04:27.844273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.783 qpair failed and we were unable to recover it. 00:27:05.783 [2024-11-20 19:04:27.844458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.783 [2024-11-20 19:04:27.844490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.784 qpair failed and we were unable to recover it. 00:27:05.784 [2024-11-20 19:04:27.844767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.784 [2024-11-20 19:04:27.844800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.784 qpair failed and we were unable to recover it. 00:27:05.784 [2024-11-20 19:04:27.845060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.784 [2024-11-20 19:04:27.845092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.784 qpair failed and we were unable to recover it. 
00:27:05.784 [2024-11-20 19:04:27.845326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.784 [2024-11-20 19:04:27.845360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.784 qpair failed and we were unable to recover it. 00:27:05.784 [2024-11-20 19:04:27.845538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.784 [2024-11-20 19:04:27.845570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.784 qpair failed and we were unable to recover it. 00:27:05.784 [2024-11-20 19:04:27.845763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.784 [2024-11-20 19:04:27.845796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.784 qpair failed and we were unable to recover it. 00:27:05.784 [2024-11-20 19:04:27.845929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.784 [2024-11-20 19:04:27.845962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.784 qpair failed and we were unable to recover it. 00:27:05.784 [2024-11-20 19:04:27.846139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.784 [2024-11-20 19:04:27.846171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.784 qpair failed and we were unable to recover it. 
00:27:05.784 [2024-11-20 19:04:27.846369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.784 [2024-11-20 19:04:27.846402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.784 qpair failed and we were unable to recover it. 00:27:05.784 [2024-11-20 19:04:27.846691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.784 [2024-11-20 19:04:27.846723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.784 qpair failed and we were unable to recover it. 00:27:05.784 [2024-11-20 19:04:27.846905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.784 [2024-11-20 19:04:27.846937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.784 qpair failed and we were unable to recover it. 00:27:05.784 [2024-11-20 19:04:27.847110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.784 [2024-11-20 19:04:27.847149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.784 qpair failed and we were unable to recover it. 00:27:05.784 [2024-11-20 19:04:27.847360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.784 [2024-11-20 19:04:27.847393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.784 qpair failed and we were unable to recover it. 
00:27:05.784 [2024-11-20 19:04:27.847588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.784 [2024-11-20 19:04:27.847622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.784 qpair failed and we were unable to recover it. 00:27:05.784 [2024-11-20 19:04:27.847808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.784 [2024-11-20 19:04:27.847840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.784 qpair failed and we were unable to recover it. 00:27:05.784 [2024-11-20 19:04:27.848081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.784 [2024-11-20 19:04:27.848115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.784 qpair failed and we were unable to recover it. 00:27:05.784 [2024-11-20 19:04:27.848360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.784 [2024-11-20 19:04:27.848394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.784 qpair failed and we were unable to recover it. 00:27:05.784 [2024-11-20 19:04:27.848634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.784 [2024-11-20 19:04:27.848667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.784 qpair failed and we were unable to recover it. 
00:27:05.784 [2024-11-20 19:04:27.848913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.784 [2024-11-20 19:04:27.848947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.784 qpair failed and we were unable to recover it. 00:27:05.784 [2024-11-20 19:04:27.849142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.784 [2024-11-20 19:04:27.849175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.784 qpair failed and we were unable to recover it. 00:27:05.784 [2024-11-20 19:04:27.849384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.784 [2024-11-20 19:04:27.849418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.784 qpair failed and we were unable to recover it. 00:27:05.784 [2024-11-20 19:04:27.849619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.784 [2024-11-20 19:04:27.849651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.784 qpair failed and we were unable to recover it. 00:27:05.784 [2024-11-20 19:04:27.849859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.784 [2024-11-20 19:04:27.849892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.784 qpair failed and we were unable to recover it. 
00:27:05.784 [2024-11-20 19:04:27.850147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.784 [2024-11-20 19:04:27.850180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.784 qpair failed and we were unable to recover it. 00:27:05.784 [2024-11-20 19:04:27.850382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.784 [2024-11-20 19:04:27.850416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.784 qpair failed and we were unable to recover it. 00:27:05.784 [2024-11-20 19:04:27.850571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.784 [2024-11-20 19:04:27.850603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.784 qpair failed and we were unable to recover it. 00:27:05.784 [2024-11-20 19:04:27.850820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.784 [2024-11-20 19:04:27.850853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.784 qpair failed and we were unable to recover it. 00:27:05.784 [2024-11-20 19:04:27.851042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.784 [2024-11-20 19:04:27.851075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.784 qpair failed and we were unable to recover it. 
00:27:05.784 [2024-11-20 19:04:27.851341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.784 [2024-11-20 19:04:27.851375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.784 qpair failed and we were unable to recover it. 00:27:05.784 [2024-11-20 19:04:27.851497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.784 [2024-11-20 19:04:27.851529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.784 qpair failed and we were unable to recover it. 00:27:05.784 [2024-11-20 19:04:27.851731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.784 [2024-11-20 19:04:27.851764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.784 qpair failed and we were unable to recover it. 00:27:05.784 [2024-11-20 19:04:27.851887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.784 [2024-11-20 19:04:27.851919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.784 qpair failed and we were unable to recover it. 00:27:05.784 [2024-11-20 19:04:27.852111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.784 [2024-11-20 19:04:27.852144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.784 qpair failed and we were unable to recover it. 
00:27:05.784 [2024-11-20 19:04:27.852260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.784 [2024-11-20 19:04:27.852298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.784 qpair failed and we were unable to recover it. 00:27:05.784 [2024-11-20 19:04:27.852512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.784 [2024-11-20 19:04:27.852546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.784 qpair failed and we were unable to recover it. 00:27:05.785 [2024-11-20 19:04:27.852746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.785 [2024-11-20 19:04:27.852778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.785 qpair failed and we were unable to recover it. 00:27:05.785 [2024-11-20 19:04:27.852966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.785 [2024-11-20 19:04:27.853000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.785 qpair failed and we were unable to recover it. 00:27:05.785 [2024-11-20 19:04:27.853255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.785 [2024-11-20 19:04:27.853288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.785 qpair failed and we were unable to recover it. 
00:27:05.785 [2024-11-20 19:04:27.853434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.785 [2024-11-20 19:04:27.853472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.785 qpair failed and we were unable to recover it.
00:27:05.785 [2024-11-20 19:04:27.853741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.785 [2024-11-20 19:04:27.853775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.785 qpair failed and we were unable to recover it.
00:27:05.785 [2024-11-20 19:04:27.853979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.785 [2024-11-20 19:04:27.854011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.785 qpair failed and we were unable to recover it.
00:27:05.785 [2024-11-20 19:04:27.854199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.785 [2024-11-20 19:04:27.854238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.785 qpair failed and we were unable to recover it.
00:27:05.785 [2024-11-20 19:04:27.854440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.785 [2024-11-20 19:04:27.854472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.785 qpair failed and we were unable to recover it.
00:27:05.785 [2024-11-20 19:04:27.854656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.785 [2024-11-20 19:04:27.854688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.785 qpair failed and we were unable to recover it.
00:27:05.785 [2024-11-20 19:04:27.854953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.785 [2024-11-20 19:04:27.854985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.785 qpair failed and we were unable to recover it.
00:27:05.785 [2024-11-20 19:04:27.855124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.785 [2024-11-20 19:04:27.855157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.785 qpair failed and we were unable to recover it.
00:27:05.785 [2024-11-20 19:04:27.855401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.785 [2024-11-20 19:04:27.855435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.785 qpair failed and we were unable to recover it.
00:27:05.785 [2024-11-20 19:04:27.855565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.785 [2024-11-20 19:04:27.855598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.785 qpair failed and we were unable to recover it.
00:27:05.785 [2024-11-20 19:04:27.855812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.785 [2024-11-20 19:04:27.855844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.785 qpair failed and we were unable to recover it.
00:27:05.785 [2024-11-20 19:04:27.855960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.785 [2024-11-20 19:04:27.855993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.785 qpair failed and we were unable to recover it.
00:27:05.785 [2024-11-20 19:04:27.856187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.785 [2024-11-20 19:04:27.856226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.785 qpair failed and we were unable to recover it.
00:27:05.785 [2024-11-20 19:04:27.856437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.785 [2024-11-20 19:04:27.856471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.785 qpair failed and we were unable to recover it.
00:27:05.785 [2024-11-20 19:04:27.856599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.785 [2024-11-20 19:04:27.856633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.785 qpair failed and we were unable to recover it.
00:27:05.785 [2024-11-20 19:04:27.856900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.785 [2024-11-20 19:04:27.856933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.785 qpair failed and we were unable to recover it.
00:27:05.785 [2024-11-20 19:04:27.857118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.785 [2024-11-20 19:04:27.857151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.785 qpair failed and we were unable to recover it.
00:27:05.785 [2024-11-20 19:04:27.857427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.785 [2024-11-20 19:04:27.857460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.785 qpair failed and we were unable to recover it.
00:27:05.785 [2024-11-20 19:04:27.857670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.785 [2024-11-20 19:04:27.857703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.785 qpair failed and we were unable to recover it.
00:27:05.785 [2024-11-20 19:04:27.857877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.785 [2024-11-20 19:04:27.857910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.785 qpair failed and we were unable to recover it.
00:27:05.785 [2024-11-20 19:04:27.858176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.785 [2024-11-20 19:04:27.858221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.785 qpair failed and we were unable to recover it.
00:27:05.785 [2024-11-20 19:04:27.858434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.785 [2024-11-20 19:04:27.858466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.785 qpair failed and we were unable to recover it.
00:27:05.785 [2024-11-20 19:04:27.858641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.785 [2024-11-20 19:04:27.858673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.785 qpair failed and we were unable to recover it.
00:27:05.785 [2024-11-20 19:04:27.858791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.785 [2024-11-20 19:04:27.858824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.785 qpair failed and we were unable to recover it.
00:27:05.785 [2024-11-20 19:04:27.859018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.785 [2024-11-20 19:04:27.859050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.785 qpair failed and we were unable to recover it.
00:27:05.785 [2024-11-20 19:04:27.859180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.785 [2024-11-20 19:04:27.859221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.785 qpair failed and we were unable to recover it.
00:27:05.785 [2024-11-20 19:04:27.859348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.785 [2024-11-20 19:04:27.859380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.785 qpair failed and we were unable to recover it.
00:27:05.785 [2024-11-20 19:04:27.859565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.785 [2024-11-20 19:04:27.859598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.785 qpair failed and we were unable to recover it.
00:27:05.785 [2024-11-20 19:04:27.859778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.785 [2024-11-20 19:04:27.859811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.785 qpair failed and we were unable to recover it.
00:27:05.785 [2024-11-20 19:04:27.859985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.785 [2024-11-20 19:04:27.860019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.785 qpair failed and we were unable to recover it.
00:27:05.785 [2024-11-20 19:04:27.860231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.785 [2024-11-20 19:04:27.860266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.785 qpair failed and we were unable to recover it.
00:27:05.785 [2024-11-20 19:04:27.860481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.785 [2024-11-20 19:04:27.860514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.785 qpair failed and we were unable to recover it.
00:27:05.785 [2024-11-20 19:04:27.860758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.785 [2024-11-20 19:04:27.860790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.785 qpair failed and we were unable to recover it.
00:27:05.786 [2024-11-20 19:04:27.860996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.786 [2024-11-20 19:04:27.861029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.786 qpair failed and we were unable to recover it.
00:27:05.786 [2024-11-20 19:04:27.861340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.786 [2024-11-20 19:04:27.861375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.786 qpair failed and we were unable to recover it.
00:27:05.786 [2024-11-20 19:04:27.861564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.786 [2024-11-20 19:04:27.861596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.786 qpair failed and we were unable to recover it.
00:27:05.786 [2024-11-20 19:04:27.861786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.786 [2024-11-20 19:04:27.861819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.786 qpair failed and we were unable to recover it.
00:27:05.786 [2024-11-20 19:04:27.861948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.786 [2024-11-20 19:04:27.861981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.786 qpair failed and we were unable to recover it.
00:27:05.786 [2024-11-20 19:04:27.862164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.786 [2024-11-20 19:04:27.862197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.786 qpair failed and we were unable to recover it.
00:27:05.786 [2024-11-20 19:04:27.862361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.786 [2024-11-20 19:04:27.862394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.786 qpair failed and we were unable to recover it.
00:27:05.786 [2024-11-20 19:04:27.862606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.786 [2024-11-20 19:04:27.862639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.786 qpair failed and we were unable to recover it.
00:27:05.786 [2024-11-20 19:04:27.862888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.786 [2024-11-20 19:04:27.862960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.786 qpair failed and we were unable to recover it.
00:27:05.786 [2024-11-20 19:04:27.863269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.786 [2024-11-20 19:04:27.863309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.786 qpair failed and we were unable to recover it.
00:27:05.786 [2024-11-20 19:04:27.863557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.786 [2024-11-20 19:04:27.863592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.786 qpair failed and we were unable to recover it.
00:27:05.786 [2024-11-20 19:04:27.863841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.786 [2024-11-20 19:04:27.863875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.786 qpair failed and we were unable to recover it.
00:27:05.786 [2024-11-20 19:04:27.864067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.786 [2024-11-20 19:04:27.864101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.786 qpair failed and we were unable to recover it.
00:27:05.786 [2024-11-20 19:04:27.864235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.786 [2024-11-20 19:04:27.864271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.786 qpair failed and we were unable to recover it.
00:27:05.786 [2024-11-20 19:04:27.864378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.786 [2024-11-20 19:04:27.864411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.786 qpair failed and we were unable to recover it.
00:27:05.786 [2024-11-20 19:04:27.864564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.786 [2024-11-20 19:04:27.864598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.786 qpair failed and we were unable to recover it.
00:27:05.786 [2024-11-20 19:04:27.864854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.786 [2024-11-20 19:04:27.864888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.786 qpair failed and we were unable to recover it.
00:27:05.786 [2024-11-20 19:04:27.865023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.786 [2024-11-20 19:04:27.865056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.786 qpair failed and we were unable to recover it.
00:27:05.786 [2024-11-20 19:04:27.865303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.786 [2024-11-20 19:04:27.865339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.786 qpair failed and we were unable to recover it.
00:27:05.786 [2024-11-20 19:04:27.865544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.786 [2024-11-20 19:04:27.865578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.786 qpair failed and we were unable to recover it.
00:27:05.786 [2024-11-20 19:04:27.865789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.786 [2024-11-20 19:04:27.865822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.786 qpair failed and we were unable to recover it.
00:27:05.786 [2024-11-20 19:04:27.866008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.786 [2024-11-20 19:04:27.866051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.786 qpair failed and we were unable to recover it.
00:27:05.786 [2024-11-20 19:04:27.866358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.786 [2024-11-20 19:04:27.866394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.786 qpair failed and we were unable to recover it.
00:27:05.786 [2024-11-20 19:04:27.866530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.786 [2024-11-20 19:04:27.866562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.786 qpair failed and we were unable to recover it.
00:27:05.786 [2024-11-20 19:04:27.866707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.786 [2024-11-20 19:04:27.866741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.786 qpair failed and we were unable to recover it.
00:27:05.786 [2024-11-20 19:04:27.866981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.786 [2024-11-20 19:04:27.867014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.786 qpair failed and we were unable to recover it.
00:27:05.786 [2024-11-20 19:04:27.867191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.786 [2024-11-20 19:04:27.867237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.786 qpair failed and we were unable to recover it.
00:27:05.786 [2024-11-20 19:04:27.867356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.786 [2024-11-20 19:04:27.867389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.786 qpair failed and we were unable to recover it.
00:27:05.786 [2024-11-20 19:04:27.867657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.786 [2024-11-20 19:04:27.867691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.786 qpair failed and we were unable to recover it.
00:27:05.786 [2024-11-20 19:04:27.867931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.786 [2024-11-20 19:04:27.867964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.786 qpair failed and we were unable to recover it.
00:27:05.786 [2024-11-20 19:04:27.868096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.786 [2024-11-20 19:04:27.868129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.786 qpair failed and we were unable to recover it.
00:27:05.786 [2024-11-20 19:04:27.868323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.786 [2024-11-20 19:04:27.868357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.786 qpair failed and we were unable to recover it.
00:27:05.786 [2024-11-20 19:04:27.868538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.786 [2024-11-20 19:04:27.868572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.786 qpair failed and we were unable to recover it.
00:27:05.786 [2024-11-20 19:04:27.868842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.786 [2024-11-20 19:04:27.868876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.786 qpair failed and we were unable to recover it.
00:27:05.787 [2024-11-20 19:04:27.869011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.787 [2024-11-20 19:04:27.869043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.787 qpair failed and we were unable to recover it.
00:27:05.787 [2024-11-20 19:04:27.869225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.787 [2024-11-20 19:04:27.869260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.787 qpair failed and we were unable to recover it.
00:27:05.787 [2024-11-20 19:04:27.869447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.787 [2024-11-20 19:04:27.869480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.787 qpair failed and we were unable to recover it.
00:27:05.787 [2024-11-20 19:04:27.869666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.787 [2024-11-20 19:04:27.869699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.787 qpair failed and we were unable to recover it.
00:27:05.787 [2024-11-20 19:04:27.869824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.787 [2024-11-20 19:04:27.869856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.787 qpair failed and we were unable to recover it.
00:27:05.787 [2024-11-20 19:04:27.870125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.787 [2024-11-20 19:04:27.870159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.787 qpair failed and we were unable to recover it.
00:27:05.787 [2024-11-20 19:04:27.870358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.787 [2024-11-20 19:04:27.870392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.787 qpair failed and we were unable to recover it.
00:27:05.787 [2024-11-20 19:04:27.870532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.787 [2024-11-20 19:04:27.870565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.787 qpair failed and we were unable to recover it.
00:27:05.787 [2024-11-20 19:04:27.870686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.787 [2024-11-20 19:04:27.870720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.787 qpair failed and we were unable to recover it.
00:27:05.787 [2024-11-20 19:04:27.870840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.787 [2024-11-20 19:04:27.870873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.787 qpair failed and we were unable to recover it.
00:27:05.787 [2024-11-20 19:04:27.871155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.787 [2024-11-20 19:04:27.871188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.787 qpair failed and we were unable to recover it.
00:27:05.787 [2024-11-20 19:04:27.871384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.787 [2024-11-20 19:04:27.871418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.787 qpair failed and we were unable to recover it.
00:27:05.787 [2024-11-20 19:04:27.871539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.787 [2024-11-20 19:04:27.871572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.787 qpair failed and we were unable to recover it.
00:27:05.787 [2024-11-20 19:04:27.871782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.787 [2024-11-20 19:04:27.871815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:05.787 qpair failed and we were unable to recover it.
00:27:05.787 [2024-11-20 19:04:27.872008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.787 [2024-11-20 19:04:27.872046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.787 qpair failed and we were unable to recover it.
00:27:05.787 [2024-11-20 19:04:27.872313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.787 [2024-11-20 19:04:27.872349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.787 qpair failed and we were unable to recover it.
00:27:05.787 [2024-11-20 19:04:27.872602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.787 [2024-11-20 19:04:27.872634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.787 qpair failed and we were unable to recover it.
00:27:05.787 [2024-11-20 19:04:27.872886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.787 [2024-11-20 19:04:27.872920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.787 qpair failed and we were unable to recover it.
00:27:05.787 [2024-11-20 19:04:27.873105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.787 [2024-11-20 19:04:27.873138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.787 qpair failed and we were unable to recover it.
00:27:05.787 [2024-11-20 19:04:27.873324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.787 [2024-11-20 19:04:27.873358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.787 qpair failed and we were unable to recover it.
00:27:05.787 [2024-11-20 19:04:27.873598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.787 [2024-11-20 19:04:27.873630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.787 qpair failed and we were unable to recover it.
00:27:05.787 [2024-11-20 19:04:27.873751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.787 [2024-11-20 19:04:27.873782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.787 qpair failed and we were unable to recover it.
00:27:05.787 [2024-11-20 19:04:27.873899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.787 [2024-11-20 19:04:27.873930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.787 qpair failed and we were unable to recover it.
00:27:05.787 [2024-11-20 19:04:27.874126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.787 [2024-11-20 19:04:27.874160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.787 qpair failed and we were unable to recover it.
00:27:05.787 [2024-11-20 19:04:27.874304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.787 [2024-11-20 19:04:27.874338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.787 qpair failed and we were unable to recover it.
00:27:05.787 [2024-11-20 19:04:27.874455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.787 [2024-11-20 19:04:27.874489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.787 qpair failed and we were unable to recover it.
00:27:05.787 [2024-11-20 19:04:27.874733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.787 [2024-11-20 19:04:27.874764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.787 qpair failed and we were unable to recover it.
00:27:05.787 [2024-11-20 19:04:27.874945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.787 [2024-11-20 19:04:27.874977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.787 qpair failed and we were unable to recover it.
00:27:05.787 [2024-11-20 19:04:27.875176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.787 [2024-11-20 19:04:27.875217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.787 qpair failed and we were unable to recover it.
00:27:05.787 [2024-11-20 19:04:27.875468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.787 [2024-11-20 19:04:27.875501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.787 qpair failed and we were unable to recover it.
00:27:05.787 [2024-11-20 19:04:27.875691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.787 [2024-11-20 19:04:27.875724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.787 qpair failed and we were unable to recover it.
00:27:05.787 [2024-11-20 19:04:27.876004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.787 [2024-11-20 19:04:27.876036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.787 qpair failed and we were unable to recover it.
00:27:05.787 [2024-11-20 19:04:27.876305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.787 [2024-11-20 19:04:27.876345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.787 qpair failed and we were unable to recover it.
00:27:05.787 [2024-11-20 19:04:27.876587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.788 [2024-11-20 19:04:27.876619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.788 qpair failed and we were unable to recover it.
00:27:05.788 [2024-11-20 19:04:27.876883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.788 [2024-11-20 19:04:27.876916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.788 qpair failed and we were unable to recover it.
00:27:05.788 [2024-11-20 19:04:27.877046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.788 [2024-11-20 19:04:27.877078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.788 qpair failed and we were unable to recover it.
00:27:05.788 [2024-11-20 19:04:27.877303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.788 [2024-11-20 19:04:27.877337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.788 qpair failed and we were unable to recover it.
00:27:05.788 [2024-11-20 19:04:27.877578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.788 [2024-11-20 19:04:27.877611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.788 qpair failed and we were unable to recover it.
00:27:05.788 [2024-11-20 19:04:27.877877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.788 [2024-11-20 19:04:27.877909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.788 qpair failed and we were unable to recover it.
00:27:05.788 [2024-11-20 19:04:27.878104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.788 [2024-11-20 19:04:27.878136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.788 qpair failed and we were unable to recover it.
00:27:05.788 [2024-11-20 19:04:27.878400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.788 [2024-11-20 19:04:27.878436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.788 qpair failed and we were unable to recover it.
00:27:05.788 [2024-11-20 19:04:27.878542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.788 [2024-11-20 19:04:27.878578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.788 qpair failed and we were unable to recover it.
00:27:05.788 [2024-11-20 19:04:27.878835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.788 [2024-11-20 19:04:27.878868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.788 qpair failed and we were unable to recover it.
00:27:05.788 [2024-11-20 19:04:27.879059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.788 [2024-11-20 19:04:27.879090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.788 qpair failed and we were unable to recover it.
00:27:05.788 [2024-11-20 19:04:27.879259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.788 [2024-11-20 19:04:27.879293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.788 qpair failed and we were unable to recover it.
00:27:05.788 [2024-11-20 19:04:27.879484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.788 [2024-11-20 19:04:27.879517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.788 qpair failed and we were unable to recover it.
00:27:05.788 [2024-11-20 19:04:27.879693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.788 [2024-11-20 19:04:27.879736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.788 qpair failed and we were unable to recover it.
00:27:05.788 [2024-11-20 19:04:27.879988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.788 [2024-11-20 19:04:27.880019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.788 qpair failed and we were unable to recover it.
00:27:05.788 [2024-11-20 19:04:27.880214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.788 [2024-11-20 19:04:27.880252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.788 qpair failed and we were unable to recover it. 00:27:05.788 [2024-11-20 19:04:27.880442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.788 [2024-11-20 19:04:27.880476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.788 qpair failed and we were unable to recover it. 00:27:05.788 [2024-11-20 19:04:27.880678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.788 [2024-11-20 19:04:27.880711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.788 qpair failed and we were unable to recover it. 00:27:05.788 [2024-11-20 19:04:27.880920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.788 [2024-11-20 19:04:27.880953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.788 qpair failed and we were unable to recover it. 00:27:05.788 [2024-11-20 19:04:27.881223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.788 [2024-11-20 19:04:27.881258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.788 qpair failed and we were unable to recover it. 
00:27:05.788 [2024-11-20 19:04:27.881398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.788 [2024-11-20 19:04:27.881431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.788 qpair failed and we were unable to recover it. 00:27:05.788 [2024-11-20 19:04:27.881682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.788 [2024-11-20 19:04:27.881716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.788 qpair failed and we were unable to recover it. 00:27:05.788 [2024-11-20 19:04:27.881849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.788 [2024-11-20 19:04:27.881890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.788 qpair failed and we were unable to recover it. 00:27:05.788 [2024-11-20 19:04:27.882007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.788 [2024-11-20 19:04:27.882039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.788 qpair failed and we were unable to recover it. 00:27:05.788 [2024-11-20 19:04:27.882232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.788 [2024-11-20 19:04:27.882267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.788 qpair failed and we were unable to recover it. 
00:27:05.788 [2024-11-20 19:04:27.882469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.788 [2024-11-20 19:04:27.882503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.788 qpair failed and we were unable to recover it. 00:27:05.788 [2024-11-20 19:04:27.882749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.788 [2024-11-20 19:04:27.882782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.788 qpair failed and we were unable to recover it. 00:27:05.788 [2024-11-20 19:04:27.882976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.788 [2024-11-20 19:04:27.883018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.788 qpair failed and we were unable to recover it. 00:27:05.788 [2024-11-20 19:04:27.883285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.788 [2024-11-20 19:04:27.883320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.788 qpair failed and we were unable to recover it. 00:27:05.788 [2024-11-20 19:04:27.883586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.788 [2024-11-20 19:04:27.883626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.788 qpair failed and we were unable to recover it. 
00:27:05.788 [2024-11-20 19:04:27.883743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.788 [2024-11-20 19:04:27.883776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.789 qpair failed and we were unable to recover it. 00:27:05.789 [2024-11-20 19:04:27.883974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.789 [2024-11-20 19:04:27.884007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.789 qpair failed and we were unable to recover it. 00:27:05.789 [2024-11-20 19:04:27.884224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.789 [2024-11-20 19:04:27.884258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.789 qpair failed and we were unable to recover it. 00:27:05.789 [2024-11-20 19:04:27.884524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.789 [2024-11-20 19:04:27.884558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.789 qpair failed and we were unable to recover it. 00:27:05.789 [2024-11-20 19:04:27.884798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.789 [2024-11-20 19:04:27.884831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.789 qpair failed and we were unable to recover it. 
00:27:05.789 [2024-11-20 19:04:27.885011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.789 [2024-11-20 19:04:27.885050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.789 qpair failed and we were unable to recover it. 00:27:05.789 [2024-11-20 19:04:27.885329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.789 [2024-11-20 19:04:27.885365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.789 qpair failed and we were unable to recover it. 00:27:05.789 [2024-11-20 19:04:27.885563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.789 [2024-11-20 19:04:27.885596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.789 qpair failed and we were unable to recover it. 00:27:05.789 [2024-11-20 19:04:27.885839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.789 [2024-11-20 19:04:27.885874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.789 qpair failed and we were unable to recover it. 00:27:05.789 [2024-11-20 19:04:27.886002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.789 [2024-11-20 19:04:27.886035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.789 qpair failed and we were unable to recover it. 
00:27:05.789 [2024-11-20 19:04:27.886298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.789 [2024-11-20 19:04:27.886332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.789 qpair failed and we were unable to recover it. 00:27:05.789 [2024-11-20 19:04:27.886467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.789 [2024-11-20 19:04:27.886501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.789 qpair failed and we were unable to recover it. 00:27:05.789 [2024-11-20 19:04:27.886747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.789 [2024-11-20 19:04:27.886781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.789 qpair failed and we were unable to recover it. 00:27:05.789 [2024-11-20 19:04:27.887023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.789 [2024-11-20 19:04:27.887056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.789 qpair failed and we were unable to recover it. 00:27:05.789 [2024-11-20 19:04:27.887190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.789 [2024-11-20 19:04:27.887233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.789 qpair failed and we were unable to recover it. 
00:27:05.789 [2024-11-20 19:04:27.887433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.789 [2024-11-20 19:04:27.887472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.789 qpair failed and we were unable to recover it. 00:27:05.789 [2024-11-20 19:04:27.887659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.789 [2024-11-20 19:04:27.887700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.789 qpair failed and we were unable to recover it. 00:27:05.789 [2024-11-20 19:04:27.887834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.789 [2024-11-20 19:04:27.887868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.789 qpair failed and we were unable to recover it. 00:27:05.789 [2024-11-20 19:04:27.888075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.789 [2024-11-20 19:04:27.888109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.789 qpair failed and we were unable to recover it. 00:27:05.789 [2024-11-20 19:04:27.888362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.789 [2024-11-20 19:04:27.888397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.789 qpair failed and we were unable to recover it. 
00:27:05.789 [2024-11-20 19:04:27.888515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.789 [2024-11-20 19:04:27.888549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.789 qpair failed and we were unable to recover it. 00:27:05.789 [2024-11-20 19:04:27.888655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.789 [2024-11-20 19:04:27.888688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.789 qpair failed and we were unable to recover it. 00:27:05.789 [2024-11-20 19:04:27.888858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.789 [2024-11-20 19:04:27.888889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.789 qpair failed and we were unable to recover it. 00:27:05.789 [2024-11-20 19:04:27.889090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.789 [2024-11-20 19:04:27.889123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.789 qpair failed and we were unable to recover it. 00:27:05.789 [2024-11-20 19:04:27.889254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.789 [2024-11-20 19:04:27.889288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.789 qpair failed and we were unable to recover it. 
00:27:05.789 [2024-11-20 19:04:27.889466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.789 [2024-11-20 19:04:27.889499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.789 qpair failed and we were unable to recover it. 00:27:05.789 [2024-11-20 19:04:27.889633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.789 [2024-11-20 19:04:27.889667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.789 qpair failed and we were unable to recover it. 00:27:05.789 [2024-11-20 19:04:27.889862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.789 [2024-11-20 19:04:27.889894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.789 qpair failed and we were unable to recover it. 00:27:05.789 [2024-11-20 19:04:27.890133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.789 [2024-11-20 19:04:27.890167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.789 qpair failed and we were unable to recover it. 00:27:05.789 [2024-11-20 19:04:27.890483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.790 [2024-11-20 19:04:27.890518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.790 qpair failed and we were unable to recover it. 
00:27:05.790 [2024-11-20 19:04:27.890762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.790 [2024-11-20 19:04:27.890797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.790 qpair failed and we were unable to recover it. 00:27:05.790 [2024-11-20 19:04:27.890901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.790 [2024-11-20 19:04:27.890933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.790 qpair failed and we were unable to recover it. 00:27:05.790 [2024-11-20 19:04:27.891120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.790 [2024-11-20 19:04:27.891159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.790 qpair failed and we were unable to recover it. 00:27:05.790 [2024-11-20 19:04:27.891415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.790 [2024-11-20 19:04:27.891450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.790 qpair failed and we were unable to recover it. 00:27:05.790 [2024-11-20 19:04:27.891646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.790 [2024-11-20 19:04:27.891679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.790 qpair failed and we were unable to recover it. 
00:27:05.790 [2024-11-20 19:04:27.891873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.790 [2024-11-20 19:04:27.891906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.790 qpair failed and we were unable to recover it. 00:27:05.790 [2024-11-20 19:04:27.892093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.790 [2024-11-20 19:04:27.892126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.790 qpair failed and we were unable to recover it. 00:27:05.790 [2024-11-20 19:04:27.892342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.790 [2024-11-20 19:04:27.892378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.790 qpair failed and we were unable to recover it. 00:27:05.790 [2024-11-20 19:04:27.892550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.790 [2024-11-20 19:04:27.892583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.790 qpair failed and we were unable to recover it. 00:27:05.790 [2024-11-20 19:04:27.892775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.790 [2024-11-20 19:04:27.892809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.790 qpair failed and we were unable to recover it. 
00:27:05.790 [2024-11-20 19:04:27.892985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.790 [2024-11-20 19:04:27.893019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.790 qpair failed and we were unable to recover it. 00:27:05.790 [2024-11-20 19:04:27.893147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.790 [2024-11-20 19:04:27.893181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.790 qpair failed and we were unable to recover it. 00:27:05.790 [2024-11-20 19:04:27.893444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.790 [2024-11-20 19:04:27.893477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.790 qpair failed and we were unable to recover it. 00:27:05.790 [2024-11-20 19:04:27.893676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.790 [2024-11-20 19:04:27.893709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.790 qpair failed and we were unable to recover it. 00:27:05.790 [2024-11-20 19:04:27.893821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.790 [2024-11-20 19:04:27.893853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.790 qpair failed and we were unable to recover it. 
00:27:05.790 [2024-11-20 19:04:27.894128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.790 [2024-11-20 19:04:27.894161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.790 qpair failed and we were unable to recover it. 00:27:05.790 [2024-11-20 19:04:27.894385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.790 [2024-11-20 19:04:27.894419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.790 qpair failed and we were unable to recover it. 00:27:05.790 [2024-11-20 19:04:27.894623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.790 [2024-11-20 19:04:27.894656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.790 qpair failed and we were unable to recover it. 00:27:05.790 [2024-11-20 19:04:27.894790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.790 [2024-11-20 19:04:27.894824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.790 qpair failed and we were unable to recover it. 00:27:05.790 [2024-11-20 19:04:27.895022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.790 [2024-11-20 19:04:27.895056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.790 qpair failed and we were unable to recover it. 
00:27:05.790 [2024-11-20 19:04:27.895188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.790 [2024-11-20 19:04:27.895228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.790 qpair failed and we were unable to recover it. 00:27:05.790 [2024-11-20 19:04:27.895480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.790 [2024-11-20 19:04:27.895514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.790 qpair failed and we were unable to recover it. 00:27:05.790 [2024-11-20 19:04:27.895754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.790 [2024-11-20 19:04:27.895788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.790 qpair failed and we were unable to recover it. 00:27:05.790 [2024-11-20 19:04:27.895966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.790 [2024-11-20 19:04:27.895999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.790 qpair failed and we were unable to recover it. 00:27:05.790 [2024-11-20 19:04:27.896263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.790 [2024-11-20 19:04:27.896298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.790 qpair failed and we were unable to recover it. 
00:27:05.790 [2024-11-20 19:04:27.896416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.790 [2024-11-20 19:04:27.896450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.790 qpair failed and we were unable to recover it. 00:27:05.790 [2024-11-20 19:04:27.896634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.790 [2024-11-20 19:04:27.896667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.790 qpair failed and we were unable to recover it. 00:27:05.790 [2024-11-20 19:04:27.896842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.790 [2024-11-20 19:04:27.896876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.790 qpair failed and we were unable to recover it. 00:27:05.790 [2024-11-20 19:04:27.897068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.790 [2024-11-20 19:04:27.897102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.790 qpair failed and we were unable to recover it. 00:27:05.790 [2024-11-20 19:04:27.897301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.790 [2024-11-20 19:04:27.897335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.790 qpair failed and we were unable to recover it. 
00:27:05.790 [2024-11-20 19:04:27.897526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.790 [2024-11-20 19:04:27.897560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.790 qpair failed and we were unable to recover it.
00:27:05.790 [2024-11-20 19:04:27.897770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.790 [2024-11-20 19:04:27.897803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.790 qpair failed and we were unable to recover it.
00:27:05.790 [2024-11-20 19:04:27.897979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.791 [2024-11-20 19:04:27.898012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.791 qpair failed and we were unable to recover it.
00:27:05.791 [2024-11-20 19:04:27.898136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.791 [2024-11-20 19:04:27.898173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.791 qpair failed and we were unable to recover it.
00:27:05.791 [2024-11-20 19:04:27.898414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.791 [2024-11-20 19:04:27.898457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.791 qpair failed and we were unable to recover it.
00:27:05.791 [2024-11-20 19:04:27.898752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.791 [2024-11-20 19:04:27.898785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.791 qpair failed and we were unable to recover it.
00:27:05.791 [2024-11-20 19:04:27.898978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.791 [2024-11-20 19:04:27.899012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.791 qpair failed and we were unable to recover it.
00:27:05.791 [2024-11-20 19:04:27.899281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.791 [2024-11-20 19:04:27.899324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.791 qpair failed and we were unable to recover it.
00:27:05.791 [2024-11-20 19:04:27.899435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.791 [2024-11-20 19:04:27.899475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.791 qpair failed and we were unable to recover it.
00:27:05.791 [2024-11-20 19:04:27.899665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.791 [2024-11-20 19:04:27.899699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.791 qpair failed and we were unable to recover it.
00:27:05.791 [2024-11-20 19:04:27.899885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.791 [2024-11-20 19:04:27.899919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.791 qpair failed and we were unable to recover it.
00:27:05.791 [2024-11-20 19:04:27.900133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.791 [2024-11-20 19:04:27.900166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.791 qpair failed and we were unable to recover it.
00:27:05.791 [2024-11-20 19:04:27.900302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.791 [2024-11-20 19:04:27.900336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.791 qpair failed and we were unable to recover it.
00:27:05.791 [2024-11-20 19:04:27.900604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.791 [2024-11-20 19:04:27.900643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.791 qpair failed and we were unable to recover it.
00:27:05.791 [2024-11-20 19:04:27.900911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.791 [2024-11-20 19:04:27.900943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.791 qpair failed and we were unable to recover it.
00:27:05.791 [2024-11-20 19:04:27.901056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.791 [2024-11-20 19:04:27.901090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.791 qpair failed and we were unable to recover it.
00:27:05.791 [2024-11-20 19:04:27.901369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.791 [2024-11-20 19:04:27.901404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.791 qpair failed and we were unable to recover it.
00:27:05.791 [2024-11-20 19:04:27.901543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.791 [2024-11-20 19:04:27.901579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.791 qpair failed and we were unable to recover it.
00:27:05.791 [2024-11-20 19:04:27.901799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.791 [2024-11-20 19:04:27.901833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.791 qpair failed and we were unable to recover it.
00:27:05.791 [2024-11-20 19:04:27.901973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.791 [2024-11-20 19:04:27.902007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.791 qpair failed and we were unable to recover it.
00:27:05.791 [2024-11-20 19:04:27.902182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.791 [2024-11-20 19:04:27.902234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.791 qpair failed and we were unable to recover it.
00:27:05.791 [2024-11-20 19:04:27.902461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.791 [2024-11-20 19:04:27.902494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.791 qpair failed and we were unable to recover it.
00:27:05.791 [2024-11-20 19:04:27.902680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.791 [2024-11-20 19:04:27.902713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.791 qpair failed and we were unable to recover it.
00:27:05.791 [2024-11-20 19:04:27.902950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.791 [2024-11-20 19:04:27.902983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.791 qpair failed and we were unable to recover it.
00:27:05.791 [2024-11-20 19:04:27.903222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.791 [2024-11-20 19:04:27.903256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.791 qpair failed and we were unable to recover it.
00:27:05.791 [2024-11-20 19:04:27.903445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.791 [2024-11-20 19:04:27.903478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.791 qpair failed and we were unable to recover it.
00:27:05.791 [2024-11-20 19:04:27.903601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.791 [2024-11-20 19:04:27.903635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.791 qpair failed and we were unable to recover it.
00:27:05.791 [2024-11-20 19:04:27.903835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.791 [2024-11-20 19:04:27.903867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.791 qpair failed and we were unable to recover it.
00:27:05.791 [2024-11-20 19:04:27.903990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.791 [2024-11-20 19:04:27.904024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.791 qpair failed and we were unable to recover it.
00:27:05.791 [2024-11-20 19:04:27.904265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.791 [2024-11-20 19:04:27.904299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.791 qpair failed and we were unable to recover it.
00:27:05.791 [2024-11-20 19:04:27.904496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.791 [2024-11-20 19:04:27.904530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.791 qpair failed and we were unable to recover it.
00:27:05.791 [2024-11-20 19:04:27.904719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.791 [2024-11-20 19:04:27.904752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.791 qpair failed and we were unable to recover it.
00:27:05.791 [2024-11-20 19:04:27.904887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.791 [2024-11-20 19:04:27.904920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.791 qpair failed and we were unable to recover it.
00:27:05.791 [2024-11-20 19:04:27.905044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.791 [2024-11-20 19:04:27.905077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.791 qpair failed and we were unable to recover it.
00:27:05.791 [2024-11-20 19:04:27.905352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.791 [2024-11-20 19:04:27.905387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.791 qpair failed and we were unable to recover it.
00:27:05.791 [2024-11-20 19:04:27.905595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.792 [2024-11-20 19:04:27.905629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.792 qpair failed and we were unable to recover it.
00:27:05.792 [2024-11-20 19:04:27.905758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.792 [2024-11-20 19:04:27.905792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.792 qpair failed and we were unable to recover it.
00:27:05.792 [2024-11-20 19:04:27.906083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.792 [2024-11-20 19:04:27.906116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.792 qpair failed and we were unable to recover it.
00:27:05.792 [2024-11-20 19:04:27.906297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.792 [2024-11-20 19:04:27.906332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.792 qpair failed and we were unable to recover it.
00:27:05.792 [2024-11-20 19:04:27.906526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.792 [2024-11-20 19:04:27.906559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.792 qpair failed and we were unable to recover it.
00:27:05.792 [2024-11-20 19:04:27.906756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.792 [2024-11-20 19:04:27.906800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.792 qpair failed and we were unable to recover it.
00:27:05.792 [2024-11-20 19:04:27.907019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.792 [2024-11-20 19:04:27.907053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.792 qpair failed and we were unable to recover it.
00:27:05.792 [2024-11-20 19:04:27.907255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.792 [2024-11-20 19:04:27.907289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.792 qpair failed and we were unable to recover it.
00:27:05.792 [2024-11-20 19:04:27.907475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.792 [2024-11-20 19:04:27.907508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.792 qpair failed and we were unable to recover it.
00:27:05.792 [2024-11-20 19:04:27.907773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.792 [2024-11-20 19:04:27.907806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.792 qpair failed and we were unable to recover it.
00:27:05.792 [2024-11-20 19:04:27.908004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.792 [2024-11-20 19:04:27.908038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.792 qpair failed and we were unable to recover it.
00:27:05.792 [2024-11-20 19:04:27.908228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.792 [2024-11-20 19:04:27.908262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.792 qpair failed and we were unable to recover it.
00:27:05.792 [2024-11-20 19:04:27.908527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.792 [2024-11-20 19:04:27.908561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.792 qpair failed and we were unable to recover it.
00:27:05.792 [2024-11-20 19:04:27.908765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.792 [2024-11-20 19:04:27.908798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.792 qpair failed and we were unable to recover it.
00:27:05.792 [2024-11-20 19:04:27.909006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.792 [2024-11-20 19:04:27.909039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.792 qpair failed and we were unable to recover it.
00:27:05.792 [2024-11-20 19:04:27.909224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.792 [2024-11-20 19:04:27.909259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.792 qpair failed and we were unable to recover it.
00:27:05.792 [2024-11-20 19:04:27.909460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.792 [2024-11-20 19:04:27.909494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.792 qpair failed and we were unable to recover it.
00:27:05.792 [2024-11-20 19:04:27.909626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.792 [2024-11-20 19:04:27.909658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.792 qpair failed and we were unable to recover it.
00:27:05.792 [2024-11-20 19:04:27.909947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.792 [2024-11-20 19:04:27.909980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.792 qpair failed and we were unable to recover it.
00:27:05.792 [2024-11-20 19:04:27.910246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.792 [2024-11-20 19:04:27.910281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.792 qpair failed and we were unable to recover it.
00:27:05.792 [2024-11-20 19:04:27.910407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.792 [2024-11-20 19:04:27.910441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.792 qpair failed and we were unable to recover it.
00:27:05.792 [2024-11-20 19:04:27.910706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.792 [2024-11-20 19:04:27.910739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.792 qpair failed and we were unable to recover it.
00:27:05.792 [2024-11-20 19:04:27.910943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.792 [2024-11-20 19:04:27.910976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.792 qpair failed and we were unable to recover it.
00:27:05.792 [2024-11-20 19:04:27.911177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.792 [2024-11-20 19:04:27.911218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.792 qpair failed and we were unable to recover it.
00:27:05.792 [2024-11-20 19:04:27.911410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.792 [2024-11-20 19:04:27.911443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.792 qpair failed and we were unable to recover it.
00:27:05.792 [2024-11-20 19:04:27.911639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.792 [2024-11-20 19:04:27.911673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.792 qpair failed and we were unable to recover it.
00:27:05.792 [2024-11-20 19:04:27.911851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.792 [2024-11-20 19:04:27.911884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.792 qpair failed and we were unable to recover it.
00:27:05.792 [2024-11-20 19:04:27.912064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.792 [2024-11-20 19:04:27.912098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.792 qpair failed and we were unable to recover it.
00:27:05.792 [2024-11-20 19:04:27.912237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.792 [2024-11-20 19:04:27.912271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.792 qpair failed and we were unable to recover it.
00:27:05.792 [2024-11-20 19:04:27.912401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.792 [2024-11-20 19:04:27.912435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.792 qpair failed and we were unable to recover it.
00:27:05.792 [2024-11-20 19:04:27.912560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.792 [2024-11-20 19:04:27.912593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.792 qpair failed and we were unable to recover it.
00:27:05.792 [2024-11-20 19:04:27.912861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.792 [2024-11-20 19:04:27.912893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.792 qpair failed and we were unable to recover it.
00:27:05.792 [2024-11-20 19:04:27.913132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.792 [2024-11-20 19:04:27.913171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.792 qpair failed and we were unable to recover it.
00:27:05.792 [2024-11-20 19:04:27.913291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.793 [2024-11-20 19:04:27.913326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.793 qpair failed and we were unable to recover it.
00:27:05.793 [2024-11-20 19:04:27.913533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.793 [2024-11-20 19:04:27.913565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.793 qpair failed and we were unable to recover it.
00:27:05.793 [2024-11-20 19:04:27.913698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.793 [2024-11-20 19:04:27.913731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.793 qpair failed and we were unable to recover it.
00:27:05.793 [2024-11-20 19:04:27.913992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.793 [2024-11-20 19:04:27.914025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.793 qpair failed and we were unable to recover it.
00:27:05.793 [2024-11-20 19:04:27.914219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.793 [2024-11-20 19:04:27.914252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.793 qpair failed and we were unable to recover it.
00:27:05.793 [2024-11-20 19:04:27.914495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.793 [2024-11-20 19:04:27.914528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.793 qpair failed and we were unable to recover it.
00:27:05.793 [2024-11-20 19:04:27.914723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.793 [2024-11-20 19:04:27.914756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.793 qpair failed and we were unable to recover it.
00:27:05.793 [2024-11-20 19:04:27.914964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.793 [2024-11-20 19:04:27.914998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.793 qpair failed and we were unable to recover it.
00:27:05.793 [2024-11-20 19:04:27.915260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.793 [2024-11-20 19:04:27.915295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.793 qpair failed and we were unable to recover it.
00:27:05.793 [2024-11-20 19:04:27.915432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.793 [2024-11-20 19:04:27.915466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.793 qpair failed and we were unable to recover it.
00:27:05.793 [2024-11-20 19:04:27.915650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.793 [2024-11-20 19:04:27.915684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.793 qpair failed and we were unable to recover it.
00:27:05.793 [2024-11-20 19:04:27.915823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.793 [2024-11-20 19:04:27.915856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.793 qpair failed and we were unable to recover it.
00:27:05.793 [2024-11-20 19:04:27.916032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.793 [2024-11-20 19:04:27.916065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.793 qpair failed and we were unable to recover it.
00:27:05.793 [2024-11-20 19:04:27.916315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.793 [2024-11-20 19:04:27.916349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.793 qpair failed and we were unable to recover it.
00:27:05.793 [2024-11-20 19:04:27.916469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.793 [2024-11-20 19:04:27.916503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.793 qpair failed and we were unable to recover it.
00:27:05.793 [2024-11-20 19:04:27.916611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.793 [2024-11-20 19:04:27.916644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.793 qpair failed and we were unable to recover it.
00:27:05.793 [2024-11-20 19:04:27.916886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.793 [2024-11-20 19:04:27.916918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.793 qpair failed and we were unable to recover it.
00:27:05.793 [2024-11-20 19:04:27.917036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.793 [2024-11-20 19:04:27.917068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.793 qpair failed and we were unable to recover it.
00:27:05.793 [2024-11-20 19:04:27.917253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.793 [2024-11-20 19:04:27.917288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.793 qpair failed and we were unable to recover it.
00:27:05.793 [2024-11-20 19:04:27.917410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.793 [2024-11-20 19:04:27.917444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.793 qpair failed and we were unable to recover it.
00:27:05.793 [2024-11-20 19:04:27.917583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.793 [2024-11-20 19:04:27.917616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.793 qpair failed and we were unable to recover it.
00:27:05.793 [2024-11-20 19:04:27.917853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.793 [2024-11-20 19:04:27.917887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.793 qpair failed and we were unable to recover it.
00:27:05.793 [2024-11-20 19:04:27.918098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.793 [2024-11-20 19:04:27.918132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.793 qpair failed and we were unable to recover it.
00:27:05.793 [2024-11-20 19:04:27.918309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.793 [2024-11-20 19:04:27.918343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.793 qpair failed and we were unable to recover it.
00:27:05.793 [2024-11-20 19:04:27.918561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.793 [2024-11-20 19:04:27.918595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.793 qpair failed and we were unable to recover it.
00:27:05.793 [2024-11-20 19:04:27.918814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.793 [2024-11-20 19:04:27.918847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.793 qpair failed and we were unable to recover it.
00:27:05.793 [2024-11-20 19:04:27.919041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.793 [2024-11-20 19:04:27.919074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.793 qpair failed and we were unable to recover it.
00:27:05.793 [2024-11-20 19:04:27.919210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.793 [2024-11-20 19:04:27.919245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.793 qpair failed and we were unable to recover it.
00:27:05.793 [2024-11-20 19:04:27.919424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.793 [2024-11-20 19:04:27.919457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.793 qpair failed and we were unable to recover it.
00:27:05.793 [2024-11-20 19:04:27.919700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.793 [2024-11-20 19:04:27.919733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.793 qpair failed and we were unable to recover it.
00:27:05.793 [2024-11-20 19:04:27.919920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.793 [2024-11-20 19:04:27.919953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.793 qpair failed and we were unable to recover it.
00:27:05.793 [2024-11-20 19:04:27.920196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.793 [2024-11-20 19:04:27.920244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.793 qpair failed and we were unable to recover it.
00:27:05.793 [2024-11-20 19:04:27.920487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.793 [2024-11-20 19:04:27.920520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.793 qpair failed and we were unable to recover it.
00:27:05.793 [2024-11-20 19:04:27.920764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.793 [2024-11-20 19:04:27.920802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.793 qpair failed and we were unable to recover it.
00:27:05.793 [2024-11-20 19:04:27.920927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.793 [2024-11-20 19:04:27.920961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.793 qpair failed and we were unable to recover it.
00:27:05.793 [2024-11-20 19:04:27.921226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.793 [2024-11-20 19:04:27.921261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.794 qpair failed and we were unable to recover it.
00:27:05.794 [2024-11-20 19:04:27.921456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.794 [2024-11-20 19:04:27.921489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.794 qpair failed and we were unable to recover it.
00:27:05.794 [2024-11-20 19:04:27.921720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.794 [2024-11-20 19:04:27.921753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.794 qpair failed and we were unable to recover it.
00:27:05.794 [2024-11-20 19:04:27.921944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.794 [2024-11-20 19:04:27.921988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.794 qpair failed and we were unable to recover it.
00:27:05.794 [2024-11-20 19:04:27.922115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.794 [2024-11-20 19:04:27.922149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.794 qpair failed and we were unable to recover it.
00:27:05.794 [2024-11-20 19:04:27.922371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.794 [2024-11-20 19:04:27.922405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.794 qpair failed and we were unable to recover it.
00:27:05.794 [2024-11-20 19:04:27.922595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.794 [2024-11-20 19:04:27.922649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.794 qpair failed and we were unable to recover it.
00:27:05.794 [2024-11-20 19:04:27.922901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.794 [2024-11-20 19:04:27.922935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.794 qpair failed and we were unable to recover it.
00:27:05.794 [2024-11-20 19:04:27.923128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.794 [2024-11-20 19:04:27.923162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.794 qpair failed and we were unable to recover it.
00:27:05.794 [2024-11-20 19:04:27.923306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.794 [2024-11-20 19:04:27.923340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.794 qpair failed and we were unable to recover it.
00:27:05.794 [2024-11-20 19:04:27.923540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.794 [2024-11-20 19:04:27.923574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.794 qpair failed and we were unable to recover it.
00:27:05.794 [2024-11-20 19:04:27.923813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.794 [2024-11-20 19:04:27.923846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.794 qpair failed and we were unable to recover it.
00:27:05.794 [2024-11-20 19:04:27.924091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.794 [2024-11-20 19:04:27.924124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.794 qpair failed and we were unable to recover it. 00:27:05.794 [2024-11-20 19:04:27.924259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.794 [2024-11-20 19:04:27.924294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.794 qpair failed and we were unable to recover it. 00:27:05.794 [2024-11-20 19:04:27.924548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.794 [2024-11-20 19:04:27.924580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.794 qpair failed and we were unable to recover it. 00:27:05.794 [2024-11-20 19:04:27.924852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.794 [2024-11-20 19:04:27.924885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.794 qpair failed and we were unable to recover it. 00:27:05.794 [2024-11-20 19:04:27.925069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.794 [2024-11-20 19:04:27.925101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.794 qpair failed and we were unable to recover it. 
00:27:05.794 [2024-11-20 19:04:27.925342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.794 [2024-11-20 19:04:27.925377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.794 qpair failed and we were unable to recover it. 00:27:05.794 [2024-11-20 19:04:27.925594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.794 [2024-11-20 19:04:27.925627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.794 qpair failed and we were unable to recover it. 00:27:05.794 [2024-11-20 19:04:27.925841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.794 [2024-11-20 19:04:27.925874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.794 qpair failed and we were unable to recover it. 00:27:05.794 [2024-11-20 19:04:27.926094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.794 [2024-11-20 19:04:27.926126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.794 qpair failed and we were unable to recover it. 00:27:05.794 [2024-11-20 19:04:27.926246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.794 [2024-11-20 19:04:27.926281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.794 qpair failed and we were unable to recover it. 
00:27:05.794 [2024-11-20 19:04:27.926459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.794 [2024-11-20 19:04:27.926491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.794 qpair failed and we were unable to recover it. 00:27:05.794 [2024-11-20 19:04:27.926612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.794 [2024-11-20 19:04:27.926645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.794 qpair failed and we were unable to recover it. 00:27:05.794 [2024-11-20 19:04:27.926818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.794 [2024-11-20 19:04:27.926852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.794 qpair failed and we were unable to recover it. 00:27:05.794 [2024-11-20 19:04:27.927053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.794 [2024-11-20 19:04:27.927087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.794 qpair failed and we were unable to recover it. 00:27:05.794 [2024-11-20 19:04:27.927231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.794 [2024-11-20 19:04:27.927265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.794 qpair failed and we were unable to recover it. 
00:27:05.794 [2024-11-20 19:04:27.927459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.794 [2024-11-20 19:04:27.927491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.794 qpair failed and we were unable to recover it. 00:27:05.794 [2024-11-20 19:04:27.927667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.794 [2024-11-20 19:04:27.927700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.794 qpair failed and we were unable to recover it. 00:27:05.794 [2024-11-20 19:04:27.927955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.794 [2024-11-20 19:04:27.927987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 00:27:05.795 [2024-11-20 19:04:27.928161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.928194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 00:27:05.795 [2024-11-20 19:04:27.928395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.928428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 
00:27:05.795 [2024-11-20 19:04:27.928696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.928735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 00:27:05.795 [2024-11-20 19:04:27.928992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.929026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 00:27:05.795 [2024-11-20 19:04:27.929283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.929318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 00:27:05.795 [2024-11-20 19:04:27.929455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.929486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 00:27:05.795 [2024-11-20 19:04:27.929670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.929704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 
00:27:05.795 [2024-11-20 19:04:27.929918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.929950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 00:27:05.795 [2024-11-20 19:04:27.930137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.930170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 00:27:05.795 [2024-11-20 19:04:27.930357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.930394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 00:27:05.795 [2024-11-20 19:04:27.930658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.930691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 00:27:05.795 [2024-11-20 19:04:27.930954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.930987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 
00:27:05.795 [2024-11-20 19:04:27.931230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.931266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 00:27:05.795 [2024-11-20 19:04:27.931453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.931486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 00:27:05.795 [2024-11-20 19:04:27.931688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.931727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 00:27:05.795 [2024-11-20 19:04:27.931903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.931935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 00:27:05.795 [2024-11-20 19:04:27.932152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.932186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 
00:27:05.795 [2024-11-20 19:04:27.932387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.932421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 00:27:05.795 [2024-11-20 19:04:27.932597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.932629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 00:27:05.795 [2024-11-20 19:04:27.932749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.932781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 00:27:05.795 [2024-11-20 19:04:27.932913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.932945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 00:27:05.795 [2024-11-20 19:04:27.933137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.933171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 
00:27:05.795 [2024-11-20 19:04:27.933425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.933460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 00:27:05.795 [2024-11-20 19:04:27.933631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.933663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 00:27:05.795 [2024-11-20 19:04:27.933923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.933955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 00:27:05.795 [2024-11-20 19:04:27.934172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.934231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 00:27:05.795 [2024-11-20 19:04:27.934498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.934531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 
00:27:05.795 [2024-11-20 19:04:27.934738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.934770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 00:27:05.795 [2024-11-20 19:04:27.934953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.934987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 00:27:05.795 [2024-11-20 19:04:27.935122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.935160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 00:27:05.795 [2024-11-20 19:04:27.935412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.935446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 00:27:05.795 [2024-11-20 19:04:27.935633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.935667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 
00:27:05.795 [2024-11-20 19:04:27.935931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.935964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 00:27:05.795 [2024-11-20 19:04:27.936218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.936253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 00:27:05.795 [2024-11-20 19:04:27.936453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.936485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 00:27:05.795 [2024-11-20 19:04:27.936605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.936639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.795 qpair failed and we were unable to recover it. 00:27:05.795 [2024-11-20 19:04:27.936755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.795 [2024-11-20 19:04:27.936787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.796 qpair failed and we were unable to recover it. 
00:27:05.796 [2024-11-20 19:04:27.937056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.796 [2024-11-20 19:04:27.937095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.796 qpair failed and we were unable to recover it. 00:27:05.796 [2024-11-20 19:04:27.937288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.796 [2024-11-20 19:04:27.937324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.796 qpair failed and we were unable to recover it. 00:27:05.796 [2024-11-20 19:04:27.937588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.796 [2024-11-20 19:04:27.937619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.796 qpair failed and we were unable to recover it. 00:27:05.796 [2024-11-20 19:04:27.937810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.796 [2024-11-20 19:04:27.937842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.796 qpair failed and we were unable to recover it. 00:27:05.796 [2024-11-20 19:04:27.938012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.796 [2024-11-20 19:04:27.938045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.796 qpair failed and we were unable to recover it. 
00:27:05.796 [2024-11-20 19:04:27.938242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.796 [2024-11-20 19:04:27.938277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.796 qpair failed and we were unable to recover it. 00:27:05.796 [2024-11-20 19:04:27.938493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.796 [2024-11-20 19:04:27.938526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.796 qpair failed and we were unable to recover it. 00:27:05.796 [2024-11-20 19:04:27.938661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.796 [2024-11-20 19:04:27.938693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.796 qpair failed and we were unable to recover it. 00:27:05.796 [2024-11-20 19:04:27.938832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.796 [2024-11-20 19:04:27.938864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.796 qpair failed and we were unable to recover it. 00:27:05.796 [2024-11-20 19:04:27.939001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.796 [2024-11-20 19:04:27.939033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.796 qpair failed and we were unable to recover it. 
00:27:05.796 [2024-11-20 19:04:27.939275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.796 [2024-11-20 19:04:27.939310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.796 qpair failed and we were unable to recover it. 00:27:05.796 [2024-11-20 19:04:27.939444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.796 [2024-11-20 19:04:27.939476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.796 qpair failed and we were unable to recover it. 00:27:05.796 [2024-11-20 19:04:27.939675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.796 [2024-11-20 19:04:27.939709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.796 qpair failed and we were unable to recover it. 00:27:05.796 [2024-11-20 19:04:27.939839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.796 [2024-11-20 19:04:27.939871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.796 qpair failed and we were unable to recover it. 00:27:05.796 [2024-11-20 19:04:27.940199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.796 [2024-11-20 19:04:27.940240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.796 qpair failed and we were unable to recover it. 
00:27:05.796 [2024-11-20 19:04:27.940372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.796 [2024-11-20 19:04:27.940404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.796 qpair failed and we were unable to recover it. 00:27:05.796 [2024-11-20 19:04:27.940675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.796 [2024-11-20 19:04:27.940708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.796 qpair failed and we were unable to recover it. 00:27:05.796 [2024-11-20 19:04:27.940890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.796 [2024-11-20 19:04:27.940923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.796 qpair failed and we were unable to recover it. 00:27:05.796 [2024-11-20 19:04:27.941128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.796 [2024-11-20 19:04:27.941161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.796 qpair failed and we were unable to recover it. 00:27:05.796 [2024-11-20 19:04:27.941318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.796 [2024-11-20 19:04:27.941358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.796 qpair failed and we were unable to recover it. 
00:27:05.796 [2024-11-20 19:04:27.941575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.796 [2024-11-20 19:04:27.941607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.796 qpair failed and we were unable to recover it.
00:27:05.796 [... identical connect() failed (errno = 111, ECONNREFUSED) / qpair-failure messages for tqpair=0x1b6aba0 (addr=10.0.0.2, port=4420) repeated through 2024-11-20 19:04:27.968 ...]
00:27:05.800 [2024-11-20 19:04:27.968571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.800 [2024-11-20 19:04:27.968603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.800 qpair failed and we were unable to recover it. 00:27:05.800 [2024-11-20 19:04:27.968724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.800 [2024-11-20 19:04:27.968758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.800 qpair failed and we were unable to recover it. 00:27:05.800 [2024-11-20 19:04:27.968958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.800 [2024-11-20 19:04:27.968991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.800 qpair failed and we were unable to recover it. 00:27:05.800 [2024-11-20 19:04:27.969115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.800 [2024-11-20 19:04:27.969158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.800 qpair failed and we were unable to recover it. 00:27:05.800 [2024-11-20 19:04:27.969356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.800 [2024-11-20 19:04:27.969390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.800 qpair failed and we were unable to recover it. 
00:27:05.800 [2024-11-20 19:04:27.969573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.800 [2024-11-20 19:04:27.969607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.800 qpair failed and we were unable to recover it. 00:27:05.800 [2024-11-20 19:04:27.969716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.800 [2024-11-20 19:04:27.969748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.800 qpair failed and we were unable to recover it. 00:27:05.800 [2024-11-20 19:04:27.969986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.800 [2024-11-20 19:04:27.970019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.800 qpair failed and we were unable to recover it. 00:27:05.800 [2024-11-20 19:04:27.970223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.800 [2024-11-20 19:04:27.970257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.800 qpair failed and we were unable to recover it. 00:27:05.800 [2024-11-20 19:04:27.970443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.800 [2024-11-20 19:04:27.970475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.800 qpair failed and we were unable to recover it. 
00:27:05.800 [2024-11-20 19:04:27.970719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.800 [2024-11-20 19:04:27.970752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.800 qpair failed and we were unable to recover it. 00:27:05.800 [2024-11-20 19:04:27.970866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.800 [2024-11-20 19:04:27.970898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.800 qpair failed and we were unable to recover it. 00:27:05.800 [2024-11-20 19:04:27.971091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.800 [2024-11-20 19:04:27.971124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.800 qpair failed and we were unable to recover it. 00:27:05.800 [2024-11-20 19:04:27.971387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.800 [2024-11-20 19:04:27.971421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.800 qpair failed and we were unable to recover it. 00:27:05.800 [2024-11-20 19:04:27.971651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.800 [2024-11-20 19:04:27.971684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.800 qpair failed and we were unable to recover it. 
00:27:05.800 [2024-11-20 19:04:27.971881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.800 [2024-11-20 19:04:27.971914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.800 qpair failed and we were unable to recover it. 00:27:05.800 [2024-11-20 19:04:27.972088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.800 [2024-11-20 19:04:27.972121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.800 qpair failed and we were unable to recover it. 00:27:05.800 [2024-11-20 19:04:27.972297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.800 [2024-11-20 19:04:27.972332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.800 qpair failed and we were unable to recover it. 00:27:05.800 [2024-11-20 19:04:27.972467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.800 [2024-11-20 19:04:27.972499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.800 qpair failed and we were unable to recover it. 00:27:05.800 [2024-11-20 19:04:27.972761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.800 [2024-11-20 19:04:27.972794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.800 qpair failed and we were unable to recover it. 
00:27:05.800 [2024-11-20 19:04:27.972985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.800 [2024-11-20 19:04:27.973018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.800 qpair failed and we were unable to recover it. 00:27:05.800 [2024-11-20 19:04:27.973211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.800 [2024-11-20 19:04:27.973245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.800 qpair failed and we were unable to recover it. 00:27:05.800 [2024-11-20 19:04:27.973440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.800 [2024-11-20 19:04:27.973473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.800 qpair failed and we were unable to recover it. 00:27:05.800 [2024-11-20 19:04:27.973665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.800 [2024-11-20 19:04:27.973698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.800 qpair failed and we were unable to recover it. 00:27:05.800 [2024-11-20 19:04:27.973888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.800 [2024-11-20 19:04:27.973921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.800 qpair failed and we were unable to recover it. 
00:27:05.800 [2024-11-20 19:04:27.974105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.800 [2024-11-20 19:04:27.974137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.800 qpair failed and we were unable to recover it. 00:27:05.800 [2024-11-20 19:04:27.974383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.800 [2024-11-20 19:04:27.974417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.800 qpair failed and we were unable to recover it. 00:27:05.800 [2024-11-20 19:04:27.974547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.974581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 00:27:05.801 [2024-11-20 19:04:27.974824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.974856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 00:27:05.801 [2024-11-20 19:04:27.974965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.974998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 
00:27:05.801 [2024-11-20 19:04:27.975194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.975245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 00:27:05.801 [2024-11-20 19:04:27.975372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.975404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 00:27:05.801 [2024-11-20 19:04:27.975667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.975699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 00:27:05.801 [2024-11-20 19:04:27.975918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.975951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 00:27:05.801 [2024-11-20 19:04:27.976146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.976180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 
00:27:05.801 [2024-11-20 19:04:27.976378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.976410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 00:27:05.801 [2024-11-20 19:04:27.976631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.976664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 00:27:05.801 [2024-11-20 19:04:27.976851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.976885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 00:27:05.801 [2024-11-20 19:04:27.977077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.977110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 00:27:05.801 [2024-11-20 19:04:27.977233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.977267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 
00:27:05.801 [2024-11-20 19:04:27.977377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.977410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 00:27:05.801 [2024-11-20 19:04:27.977530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.977562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 00:27:05.801 [2024-11-20 19:04:27.977738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.977771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 00:27:05.801 [2024-11-20 19:04:27.977945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.977978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 00:27:05.801 [2024-11-20 19:04:27.978151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.978184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 
00:27:05.801 [2024-11-20 19:04:27.978405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.978438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 00:27:05.801 [2024-11-20 19:04:27.978559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.978591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 00:27:05.801 [2024-11-20 19:04:27.978793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.978841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 00:27:05.801 [2024-11-20 19:04:27.979085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.979117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 00:27:05.801 [2024-11-20 19:04:27.979363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.979397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 
00:27:05.801 [2024-11-20 19:04:27.979603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.979636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 00:27:05.801 [2024-11-20 19:04:27.979903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.979936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 00:27:05.801 [2024-11-20 19:04:27.980179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.980220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 00:27:05.801 [2024-11-20 19:04:27.980464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.980500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 00:27:05.801 [2024-11-20 19:04:27.980692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.980726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 
00:27:05.801 [2024-11-20 19:04:27.980899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.980940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 00:27:05.801 [2024-11-20 19:04:27.981062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.981096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 00:27:05.801 [2024-11-20 19:04:27.981295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.981330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 00:27:05.801 [2024-11-20 19:04:27.981522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.981556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 00:27:05.801 [2024-11-20 19:04:27.981700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.981734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 
00:27:05.801 [2024-11-20 19:04:27.981854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.981892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 00:27:05.801 [2024-11-20 19:04:27.982086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.982120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 00:27:05.801 [2024-11-20 19:04:27.982331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.982365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 00:27:05.801 [2024-11-20 19:04:27.982539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.801 [2024-11-20 19:04:27.982572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.801 qpair failed and we were unable to recover it. 00:27:05.802 [2024-11-20 19:04:27.982694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.802 [2024-11-20 19:04:27.982728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.802 qpair failed and we were unable to recover it. 
00:27:05.802 [2024-11-20 19:04:27.982913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.802 [2024-11-20 19:04:27.982946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.802 qpair failed and we were unable to recover it. 00:27:05.802 [2024-11-20 19:04:27.983193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.802 [2024-11-20 19:04:27.983235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.802 qpair failed and we were unable to recover it. 00:27:05.802 [2024-11-20 19:04:27.983432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.802 [2024-11-20 19:04:27.983467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.802 qpair failed and we were unable to recover it. 00:27:05.802 [2024-11-20 19:04:27.983594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.802 [2024-11-20 19:04:27.983626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.802 qpair failed and we were unable to recover it. 00:27:05.802 [2024-11-20 19:04:27.983825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.802 [2024-11-20 19:04:27.983859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.802 qpair failed and we were unable to recover it. 
00:27:05.802 [2024-11-20 19:04:27.984035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.802 [2024-11-20 19:04:27.984069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.802 qpair failed and we were unable to recover it. 00:27:05.802 [2024-11-20 19:04:27.984210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.802 [2024-11-20 19:04:27.984245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.802 qpair failed and we were unable to recover it. 00:27:05.802 [2024-11-20 19:04:27.984431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.802 [2024-11-20 19:04:27.984464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.802 qpair failed and we were unable to recover it. 00:27:05.802 [2024-11-20 19:04:27.984776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.802 [2024-11-20 19:04:27.984809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.802 qpair failed and we were unable to recover it. 00:27:05.802 [2024-11-20 19:04:27.985045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.802 [2024-11-20 19:04:27.985084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.802 qpair failed and we were unable to recover it. 
00:27:05.802 [2024-11-20 19:04:27.985218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.802 [2024-11-20 19:04:27.985253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.802 qpair failed and we were unable to recover it.
00:27:05.802 [2024-11-20 19:04:27.985523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.802 [2024-11-20 19:04:27.985556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.802 qpair failed and we were unable to recover it.
00:27:05.802 [2024-11-20 19:04:27.985756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.802 [2024-11-20 19:04:27.985790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.802 qpair failed and we were unable to recover it.
00:27:05.802 [2024-11-20 19:04:27.986015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.802 [2024-11-20 19:04:27.986049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.802 qpair failed and we were unable to recover it.
00:27:05.802 [2024-11-20 19:04:27.986244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.802 [2024-11-20 19:04:27.986279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.802 qpair failed and we were unable to recover it.
00:27:05.802 [2024-11-20 19:04:27.986466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.802 [2024-11-20 19:04:27.986499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.802 qpair failed and we were unable to recover it.
00:27:05.802 [2024-11-20 19:04:27.986764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.802 [2024-11-20 19:04:27.986798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.802 qpair failed and we were unable to recover it.
00:27:05.802 [2024-11-20 19:04:27.986967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.802 [2024-11-20 19:04:27.986999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.802 qpair failed and we were unable to recover it.
00:27:05.802 [2024-11-20 19:04:27.987189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.802 [2024-11-20 19:04:27.987233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.802 qpair failed and we were unable to recover it.
00:27:05.802 [2024-11-20 19:04:27.987442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.802 [2024-11-20 19:04:27.987475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.802 qpair failed and we were unable to recover it.
00:27:05.802 [2024-11-20 19:04:27.987668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.802 [2024-11-20 19:04:27.987701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.802 qpair failed and we were unable to recover it.
00:27:05.802 [2024-11-20 19:04:27.987822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.802 [2024-11-20 19:04:27.987856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.802 qpair failed and we were unable to recover it.
00:27:05.802 [2024-11-20 19:04:27.987966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.802 [2024-11-20 19:04:27.988000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.802 qpair failed and we were unable to recover it.
00:27:05.802 [2024-11-20 19:04:27.988304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.802 [2024-11-20 19:04:27.988339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.802 qpair failed and we were unable to recover it.
00:27:05.802 [2024-11-20 19:04:27.988583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.802 [2024-11-20 19:04:27.988616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.802 qpair failed and we were unable to recover it.
00:27:05.802 [2024-11-20 19:04:27.988855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.802 [2024-11-20 19:04:27.988888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.802 qpair failed and we were unable to recover it.
00:27:05.802 [2024-11-20 19:04:27.989082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.802 [2024-11-20 19:04:27.989116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.802 qpair failed and we were unable to recover it.
00:27:05.802 [2024-11-20 19:04:27.989360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.802 [2024-11-20 19:04:27.989395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.802 qpair failed and we were unable to recover it.
00:27:05.802 [2024-11-20 19:04:27.989515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.802 [2024-11-20 19:04:27.989549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.802 qpair failed and we were unable to recover it.
00:27:05.802 [2024-11-20 19:04:27.989753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.802 [2024-11-20 19:04:27.989790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.802 qpair failed and we were unable to recover it.
00:27:05.802 [2024-11-20 19:04:27.989980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.802 [2024-11-20 19:04:27.990013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.802 qpair failed and we were unable to recover it.
00:27:05.802 [2024-11-20 19:04:27.990252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.803 [2024-11-20 19:04:27.990286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.803 qpair failed and we were unable to recover it.
00:27:05.803 [2024-11-20 19:04:27.990393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.803 [2024-11-20 19:04:27.990433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.803 qpair failed and we were unable to recover it.
00:27:05.803 [2024-11-20 19:04:27.990580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.803 [2024-11-20 19:04:27.990614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.803 qpair failed and we were unable to recover it.
00:27:05.803 [2024-11-20 19:04:27.990733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.803 [2024-11-20 19:04:27.990766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.803 qpair failed and we were unable to recover it.
00:27:05.803 [2024-11-20 19:04:27.990895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.803 [2024-11-20 19:04:27.990929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.803 qpair failed and we were unable to recover it.
00:27:05.803 [2024-11-20 19:04:27.991104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.803 [2024-11-20 19:04:27.991143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.803 qpair failed and we were unable to recover it.
00:27:05.803 [2024-11-20 19:04:27.991277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.803 [2024-11-20 19:04:27.991311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.803 qpair failed and we were unable to recover it.
00:27:05.803 [2024-11-20 19:04:27.991518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.803 [2024-11-20 19:04:27.991551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.803 qpair failed and we were unable to recover it.
00:27:05.803 [2024-11-20 19:04:27.991740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.803 [2024-11-20 19:04:27.991774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.803 qpair failed and we were unable to recover it.
00:27:05.803 [2024-11-20 19:04:27.991887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.803 [2024-11-20 19:04:27.991919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.803 qpair failed and we were unable to recover it.
00:27:05.803 [2024-11-20 19:04:27.992110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.803 [2024-11-20 19:04:27.992144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.803 qpair failed and we were unable to recover it.
00:27:05.803 [2024-11-20 19:04:27.992328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.803 [2024-11-20 19:04:27.992363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.803 qpair failed and we were unable to recover it.
00:27:05.803 [2024-11-20 19:04:27.992571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.803 [2024-11-20 19:04:27.992604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.803 qpair failed and we were unable to recover it.
00:27:05.803 [2024-11-20 19:04:27.992794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.803 [2024-11-20 19:04:27.992827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.803 qpair failed and we were unable to recover it.
00:27:05.803 [2024-11-20 19:04:27.992954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.803 [2024-11-20 19:04:27.992987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.803 qpair failed and we were unable to recover it.
00:27:05.803 [2024-11-20 19:04:27.993115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.803 [2024-11-20 19:04:27.993148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.803 qpair failed and we were unable to recover it.
00:27:05.803 [2024-11-20 19:04:27.993399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.803 [2024-11-20 19:04:27.993433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.803 qpair failed and we were unable to recover it.
00:27:05.803 [2024-11-20 19:04:27.993555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.803 [2024-11-20 19:04:27.993589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.803 qpair failed and we were unable to recover it.
00:27:05.803 [2024-11-20 19:04:27.993860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.803 [2024-11-20 19:04:27.993893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.803 qpair failed and we were unable to recover it.
00:27:05.803 [2024-11-20 19:04:27.994091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.803 [2024-11-20 19:04:27.994124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.803 qpair failed and we were unable to recover it.
00:27:05.803 [2024-11-20 19:04:27.994285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.803 [2024-11-20 19:04:27.994320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.803 qpair failed and we were unable to recover it.
00:27:05.803 [2024-11-20 19:04:27.994493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.803 [2024-11-20 19:04:27.994526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.803 qpair failed and we were unable to recover it.
00:27:05.803 [2024-11-20 19:04:27.994817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.803 [2024-11-20 19:04:27.994851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.803 qpair failed and we were unable to recover it.
00:27:05.803 [2024-11-20 19:04:27.995026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.803 [2024-11-20 19:04:27.995059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.803 qpair failed and we were unable to recover it.
00:27:05.803 [2024-11-20 19:04:27.995248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.803 [2024-11-20 19:04:27.995282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.803 qpair failed and we were unable to recover it.
00:27:05.803 [2024-11-20 19:04:27.995404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.803 [2024-11-20 19:04:27.995438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.803 qpair failed and we were unable to recover it.
00:27:05.803 [2024-11-20 19:04:27.995557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.803 [2024-11-20 19:04:27.995590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.803 qpair failed and we were unable to recover it.
00:27:05.803 [2024-11-20 19:04:27.995778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.803 [2024-11-20 19:04:27.995812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.803 qpair failed and we were unable to recover it.
00:27:05.803 [2024-11-20 19:04:27.995995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.803 [2024-11-20 19:04:27.996028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.803 qpair failed and we were unable to recover it.
00:27:05.803 [2024-11-20 19:04:27.996142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.803 [2024-11-20 19:04:27.996175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.803 qpair failed and we were unable to recover it.
00:27:05.803 [2024-11-20 19:04:27.996380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.803 [2024-11-20 19:04:27.996414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.803 qpair failed and we were unable to recover it.
00:27:05.803 [2024-11-20 19:04:27.996552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.803 [2024-11-20 19:04:27.996586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.803 qpair failed and we were unable to recover it.
00:27:05.803 [2024-11-20 19:04:27.996727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.803 [2024-11-20 19:04:27.996760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.803 qpair failed and we were unable to recover it.
00:27:05.803 [2024-11-20 19:04:27.996968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.803 [2024-11-20 19:04:27.997001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.803 qpair failed and we were unable to recover it.
00:27:05.803 [2024-11-20 19:04:27.997134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.803 [2024-11-20 19:04:27.997167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.803 qpair failed and we were unable to recover it.
00:27:05.804 [2024-11-20 19:04:27.997401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.804 [2024-11-20 19:04:27.997435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.804 qpair failed and we were unable to recover it.
00:27:05.804 [2024-11-20 19:04:27.997659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.804 [2024-11-20 19:04:27.997693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.804 qpair failed and we were unable to recover it.
00:27:05.804 [2024-11-20 19:04:27.997937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.804 [2024-11-20 19:04:27.997970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.804 qpair failed and we were unable to recover it.
00:27:05.804 [2024-11-20 19:04:27.998120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.804 [2024-11-20 19:04:27.998158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.804 qpair failed and we were unable to recover it.
00:27:05.804 [2024-11-20 19:04:27.998372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.804 [2024-11-20 19:04:27.998407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.804 qpair failed and we were unable to recover it.
00:27:05.804 [2024-11-20 19:04:27.998613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.804 [2024-11-20 19:04:27.998647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.804 qpair failed and we were unable to recover it.
00:27:05.804 [2024-11-20 19:04:27.998859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.804 [2024-11-20 19:04:27.998892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.804 qpair failed and we were unable to recover it.
00:27:05.804 [2024-11-20 19:04:27.999064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.804 [2024-11-20 19:04:27.999097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.804 qpair failed and we were unable to recover it.
00:27:05.804 [2024-11-20 19:04:27.999298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.804 [2024-11-20 19:04:27.999333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.804 qpair failed and we were unable to recover it.
00:27:05.804 [2024-11-20 19:04:27.999549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.804 [2024-11-20 19:04:27.999582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.804 qpair failed and we were unable to recover it.
00:27:05.804 [2024-11-20 19:04:27.999762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.804 [2024-11-20 19:04:27.999804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.804 qpair failed and we were unable to recover it.
00:27:05.804 [2024-11-20 19:04:28.000097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.804 [2024-11-20 19:04:28.000141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.804 qpair failed and we were unable to recover it.
00:27:05.804 [2024-11-20 19:04:28.000340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.804 [2024-11-20 19:04:28.000375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.804 qpair failed and we were unable to recover it.
00:27:05.804 [2024-11-20 19:04:28.000641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.804 [2024-11-20 19:04:28.000674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.804 qpair failed and we were unable to recover it.
00:27:05.804 [2024-11-20 19:04:28.000864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.804 [2024-11-20 19:04:28.000897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.804 qpair failed and we were unable to recover it.
00:27:05.804 [2024-11-20 19:04:28.001148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.804 [2024-11-20 19:04:28.001181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.804 qpair failed and we were unable to recover it.
00:27:05.804 [2024-11-20 19:04:28.001321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.804 [2024-11-20 19:04:28.001355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.804 qpair failed and we were unable to recover it.
00:27:05.804 [2024-11-20 19:04:28.001527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.804 [2024-11-20 19:04:28.001560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.804 qpair failed and we were unable to recover it.
00:27:05.804 [2024-11-20 19:04:28.001739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.804 [2024-11-20 19:04:28.001782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.804 qpair failed and we were unable to recover it.
00:27:05.804 [2024-11-20 19:04:28.001887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.804 [2024-11-20 19:04:28.001929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.804 qpair failed and we were unable to recover it.
00:27:05.804 [2024-11-20 19:04:28.002138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.804 [2024-11-20 19:04:28.002171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.804 qpair failed and we were unable to recover it.
00:27:05.804 [2024-11-20 19:04:28.002352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.804 [2024-11-20 19:04:28.002386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.804 qpair failed and we were unable to recover it.
00:27:05.804 [2024-11-20 19:04:28.002563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.804 [2024-11-20 19:04:28.002597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.804 qpair failed and we were unable to recover it.
00:27:05.804 [2024-11-20 19:04:28.002793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.804 [2024-11-20 19:04:28.002826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.804 qpair failed and we were unable to recover it.
00:27:05.804 [2024-11-20 19:04:28.003063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.804 [2024-11-20 19:04:28.003096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.804 qpair failed and we were unable to recover it.
00:27:05.804 [2024-11-20 19:04:28.003368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.804 [2024-11-20 19:04:28.003402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.804 qpair failed and we were unable to recover it.
00:27:05.804 [2024-11-20 19:04:28.003669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.804 [2024-11-20 19:04:28.003702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.804 qpair failed and we were unable to recover it.
00:27:05.804 [2024-11-20 19:04:28.003895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.804 [2024-11-20 19:04:28.003929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.804 qpair failed and we were unable to recover it.
00:27:05.804 [2024-11-20 19:04:28.004168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.804 [2024-11-20 19:04:28.004208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.804 qpair failed and we were unable to recover it.
00:27:05.804 [2024-11-20 19:04:28.004404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.805 [2024-11-20 19:04:28.004437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.805 qpair failed and we were unable to recover it.
00:27:05.805 [2024-11-20 19:04:28.004542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.805 [2024-11-20 19:04:28.004575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.805 qpair failed and we were unable to recover it.
00:27:05.805 [2024-11-20 19:04:28.004815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.805 [2024-11-20 19:04:28.004849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.805 qpair failed and we were unable to recover it.
00:27:05.805 [2024-11-20 19:04:28.005032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.805 [2024-11-20 19:04:28.005065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.805 qpair failed and we were unable to recover it.
00:27:05.805 [2024-11-20 19:04:28.005197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.805 [2024-11-20 19:04:28.005238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.805 qpair failed and we were unable to recover it.
00:27:05.805 [2024-11-20 19:04:28.005368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.805 [2024-11-20 19:04:28.005401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.805 qpair failed and we were unable to recover it.
00:27:05.805 [2024-11-20 19:04:28.005589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.805 [2024-11-20 19:04:28.005623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.805 qpair failed and we were unable to recover it.
00:27:05.805 [2024-11-20 19:04:28.005816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.805 [2024-11-20 19:04:28.005848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.805 qpair failed and we were unable to recover it.
00:27:05.805 [2024-11-20 19:04:28.006115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.805 [2024-11-20 19:04:28.006148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.805 qpair failed and we were unable to recover it.
00:27:05.805 [2024-11-20 19:04:28.006276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.805 [2024-11-20 19:04:28.006314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.805 qpair failed and we were unable to recover it.
00:27:05.805 [2024-11-20 19:04:28.006455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.805 [2024-11-20 19:04:28.006488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.805 qpair failed and we were unable to recover it.
00:27:05.805 [2024-11-20 19:04:28.006678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.805 [2024-11-20 19:04:28.006711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.805 qpair failed and we were unable to recover it.
00:27:05.805 [2024-11-20 19:04:28.006905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.805 [2024-11-20 19:04:28.006938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.805 qpair failed and we were unable to recover it.
00:27:05.805 [2024-11-20 19:04:28.007177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.805 [2024-11-20 19:04:28.007220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.805 qpair failed and we were unable to recover it.
00:27:05.805 [2024-11-20 19:04:28.007395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.805 [2024-11-20 19:04:28.007427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.805 qpair failed and we were unable to recover it.
00:27:05.805 [2024-11-20 19:04:28.007529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.805 [2024-11-20 19:04:28.007562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.805 qpair failed and we were unable to recover it.
00:27:05.805 [2024-11-20 19:04:28.007700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.805 [2024-11-20 19:04:28.007733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.805 qpair failed and we were unable to recover it.
00:27:05.805 [2024-11-20 19:04:28.007929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.805 [2024-11-20 19:04:28.007963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.805 qpair failed and we were unable to recover it.
00:27:05.805 [2024-11-20 19:04:28.008104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.805 [2024-11-20 19:04:28.008137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.805 qpair failed and we were unable to recover it.
00:27:05.805 [2024-11-20 19:04:28.008445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.805 [2024-11-20 19:04:28.008480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.805 qpair failed and we were unable to recover it.
00:27:05.805 [2024-11-20 19:04:28.008701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.805 [2024-11-20 19:04:28.008735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.805 qpair failed and we were unable to recover it.
00:27:05.805 [2024-11-20 19:04:28.008947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.805 [2024-11-20 19:04:28.008979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.805 qpair failed and we were unable to recover it.
00:27:05.805 [2024-11-20 19:04:28.009175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.805 [2024-11-20 19:04:28.009216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.805 qpair failed and we were unable to recover it.
00:27:05.805 [2024-11-20 19:04:28.009436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.805 [2024-11-20 19:04:28.009470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.805 qpair failed and we were unable to recover it.
00:27:05.805 [2024-11-20 19:04:28.009714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.805 [2024-11-20 19:04:28.009747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.805 qpair failed and we were unable to recover it.
00:27:05.805 [2024-11-20 19:04:28.009986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.805 [2024-11-20 19:04:28.010019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.805 qpair failed and we were unable to recover it.
00:27:05.805 [2024-11-20 19:04:28.010154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.805 [2024-11-20 19:04:28.010187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.805 qpair failed and we were unable to recover it.
00:27:05.805 [2024-11-20 19:04:28.010390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.805 [2024-11-20 19:04:28.010431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.805 qpair failed and we were unable to recover it.
00:27:05.805 [2024-11-20 19:04:28.010701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.805 [2024-11-20 19:04:28.010734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.805 qpair failed and we were unable to recover it.
00:27:05.805 [2024-11-20 19:04:28.010924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.805 [2024-11-20 19:04:28.010957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.805 qpair failed and we were unable to recover it.
00:27:05.805 [2024-11-20 19:04:28.011223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.805 [2024-11-20 19:04:28.011259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.805 qpair failed and we were unable to recover it. 00:27:05.805 [2024-11-20 19:04:28.011365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.805 [2024-11-20 19:04:28.011398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.805 qpair failed and we were unable to recover it. 00:27:05.805 [2024-11-20 19:04:28.011584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.805 [2024-11-20 19:04:28.011618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.805 qpair failed and we were unable to recover it. 00:27:05.806 [2024-11-20 19:04:28.011813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.806 [2024-11-20 19:04:28.011847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.806 qpair failed and we were unable to recover it. 00:27:05.806 [2024-11-20 19:04:28.012032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.806 [2024-11-20 19:04:28.012065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.806 qpair failed and we were unable to recover it. 
00:27:05.806 [2024-11-20 19:04:28.012306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.806 [2024-11-20 19:04:28.012341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.806 qpair failed and we were unable to recover it. 00:27:05.806 [2024-11-20 19:04:28.012581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.806 [2024-11-20 19:04:28.012619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.806 qpair failed and we were unable to recover it. 00:27:05.806 [2024-11-20 19:04:28.012887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.806 [2024-11-20 19:04:28.012926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.806 qpair failed and we were unable to recover it. 00:27:05.806 [2024-11-20 19:04:28.013145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.806 [2024-11-20 19:04:28.013185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.806 qpair failed and we were unable to recover it. 00:27:05.806 [2024-11-20 19:04:28.013324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.806 [2024-11-20 19:04:28.013359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.806 qpair failed and we were unable to recover it. 
00:27:05.806 [2024-11-20 19:04:28.013613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.806 [2024-11-20 19:04:28.013647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.806 qpair failed and we were unable to recover it. 00:27:05.806 [2024-11-20 19:04:28.013833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.806 [2024-11-20 19:04:28.013865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.806 qpair failed and we were unable to recover it. 00:27:05.806 [2024-11-20 19:04:28.014062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.806 [2024-11-20 19:04:28.014094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.806 qpair failed and we were unable to recover it. 00:27:05.806 [2024-11-20 19:04:28.014237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.806 [2024-11-20 19:04:28.014270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.806 qpair failed and we were unable to recover it. 00:27:05.806 [2024-11-20 19:04:28.014565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.806 [2024-11-20 19:04:28.014599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.806 qpair failed and we were unable to recover it. 
00:27:05.806 [2024-11-20 19:04:28.014720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.806 [2024-11-20 19:04:28.014752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.806 qpair failed and we were unable to recover it. 00:27:05.806 [2024-11-20 19:04:28.014886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.806 [2024-11-20 19:04:28.014919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.806 qpair failed and we were unable to recover it. 00:27:05.806 [2024-11-20 19:04:28.015191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.806 [2024-11-20 19:04:28.015233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.806 qpair failed and we were unable to recover it. 00:27:05.806 [2024-11-20 19:04:28.015500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.806 [2024-11-20 19:04:28.015533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.806 qpair failed and we were unable to recover it. 00:27:05.806 [2024-11-20 19:04:28.015661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.806 [2024-11-20 19:04:28.015694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.806 qpair failed and we were unable to recover it. 
00:27:05.806 [2024-11-20 19:04:28.015945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.806 [2024-11-20 19:04:28.015979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.806 qpair failed and we were unable to recover it. 00:27:05.806 [2024-11-20 19:04:28.016225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.806 [2024-11-20 19:04:28.016260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.806 qpair failed and we were unable to recover it. 00:27:05.806 [2024-11-20 19:04:28.016392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.806 [2024-11-20 19:04:28.016425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.806 qpair failed and we were unable to recover it. 00:27:05.806 [2024-11-20 19:04:28.016544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.806 [2024-11-20 19:04:28.016577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.806 qpair failed and we were unable to recover it. 00:27:05.806 [2024-11-20 19:04:28.016828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.806 [2024-11-20 19:04:28.016861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.806 qpair failed and we were unable to recover it. 
00:27:05.806 [2024-11-20 19:04:28.017083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.806 [2024-11-20 19:04:28.017116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.806 qpair failed and we were unable to recover it. 00:27:05.806 [2024-11-20 19:04:28.017254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.806 [2024-11-20 19:04:28.017289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.806 qpair failed and we were unable to recover it. 00:27:05.806 [2024-11-20 19:04:28.017415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.806 [2024-11-20 19:04:28.017448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.806 qpair failed and we were unable to recover it. 00:27:05.806 [2024-11-20 19:04:28.017634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.806 [2024-11-20 19:04:28.017667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.806 qpair failed and we were unable to recover it. 00:27:05.806 [2024-11-20 19:04:28.017838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.806 [2024-11-20 19:04:28.017871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.806 qpair failed and we were unable to recover it. 
00:27:05.806 [2024-11-20 19:04:28.018076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.806 [2024-11-20 19:04:28.018109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.806 qpair failed and we were unable to recover it. 00:27:05.806 [2024-11-20 19:04:28.018316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.806 [2024-11-20 19:04:28.018351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.806 qpair failed and we were unable to recover it. 00:27:05.806 [2024-11-20 19:04:28.018525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.806 [2024-11-20 19:04:28.018558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.806 qpair failed and we were unable to recover it. 00:27:05.806 [2024-11-20 19:04:28.018737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.806 [2024-11-20 19:04:28.018770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.806 qpair failed and we were unable to recover it. 00:27:05.806 [2024-11-20 19:04:28.018973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.806 [2024-11-20 19:04:28.019007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.806 qpair failed and we were unable to recover it. 
00:27:05.806 [2024-11-20 19:04:28.019118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.806 [2024-11-20 19:04:28.019152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.806 qpair failed and we were unable to recover it. 00:27:05.806 [2024-11-20 19:04:28.019301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.806 [2024-11-20 19:04:28.019336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.806 qpair failed and we were unable to recover it. 00:27:05.806 [2024-11-20 19:04:28.019523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.806 [2024-11-20 19:04:28.019556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.806 qpair failed and we were unable to recover it. 00:27:05.806 [2024-11-20 19:04:28.019772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.806 [2024-11-20 19:04:28.019804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.806 qpair failed and we were unable to recover it. 00:27:05.807 [2024-11-20 19:04:28.020024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.807 [2024-11-20 19:04:28.020057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.807 qpair failed and we were unable to recover it. 
00:27:05.807 [2024-11-20 19:04:28.020183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.807 [2024-11-20 19:04:28.020226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.807 qpair failed and we were unable to recover it. 00:27:05.807 [2024-11-20 19:04:28.020414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.807 [2024-11-20 19:04:28.020447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.807 qpair failed and we were unable to recover it. 00:27:05.807 [2024-11-20 19:04:28.020589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.807 [2024-11-20 19:04:28.020622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.807 qpair failed and we were unable to recover it. 00:27:05.807 [2024-11-20 19:04:28.020811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.807 [2024-11-20 19:04:28.020848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.807 qpair failed and we were unable to recover it. 00:27:05.807 [2024-11-20 19:04:28.021089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.807 [2024-11-20 19:04:28.021121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.807 qpair failed and we were unable to recover it. 
00:27:05.807 [2024-11-20 19:04:28.021294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.807 [2024-11-20 19:04:28.021328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.807 qpair failed and we were unable to recover it. 00:27:05.807 [2024-11-20 19:04:28.021446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.807 [2024-11-20 19:04:28.021480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.807 qpair failed and we were unable to recover it. 00:27:05.807 [2024-11-20 19:04:28.021734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.807 [2024-11-20 19:04:28.021767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.807 qpair failed and we were unable to recover it. 00:27:05.807 [2024-11-20 19:04:28.021953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.807 [2024-11-20 19:04:28.021986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.807 qpair failed and we were unable to recover it. 00:27:05.807 [2024-11-20 19:04:28.022236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.807 [2024-11-20 19:04:28.022271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.807 qpair failed and we were unable to recover it. 
00:27:05.807 [2024-11-20 19:04:28.022412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.807 [2024-11-20 19:04:28.022445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.807 qpair failed and we were unable to recover it. 00:27:05.807 [2024-11-20 19:04:28.022672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.807 [2024-11-20 19:04:28.022706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.807 qpair failed and we were unable to recover it. 00:27:05.807 [2024-11-20 19:04:28.022836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.807 [2024-11-20 19:04:28.022870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.807 qpair failed and we were unable to recover it. 00:27:05.807 [2024-11-20 19:04:28.022992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.807 [2024-11-20 19:04:28.023024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.807 qpair failed and we were unable to recover it. 00:27:05.807 [2024-11-20 19:04:28.023149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.807 [2024-11-20 19:04:28.023182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.807 qpair failed and we were unable to recover it. 
00:27:05.807 [2024-11-20 19:04:28.023436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.807 [2024-11-20 19:04:28.023468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.807 qpair failed and we were unable to recover it. 00:27:05.807 [2024-11-20 19:04:28.023650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.807 [2024-11-20 19:04:28.023683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.807 qpair failed and we were unable to recover it. 00:27:05.807 [2024-11-20 19:04:28.023797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.807 [2024-11-20 19:04:28.023830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.807 qpair failed and we were unable to recover it. 00:27:05.807 [2024-11-20 19:04:28.024038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.807 [2024-11-20 19:04:28.024071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.807 qpair failed and we were unable to recover it. 00:27:05.807 [2024-11-20 19:04:28.024336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.807 [2024-11-20 19:04:28.024369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.807 qpair failed and we were unable to recover it. 
00:27:05.807 [2024-11-20 19:04:28.024505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.807 [2024-11-20 19:04:28.024538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.807 qpair failed and we were unable to recover it. 00:27:05.807 [2024-11-20 19:04:28.024810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.807 [2024-11-20 19:04:28.024843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.807 qpair failed and we were unable to recover it. 00:27:05.807 [2024-11-20 19:04:28.025027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.807 [2024-11-20 19:04:28.025060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.807 qpair failed and we were unable to recover it. 00:27:05.807 [2024-11-20 19:04:28.025297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.807 [2024-11-20 19:04:28.025330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.807 qpair failed and we were unable to recover it. 00:27:05.807 [2024-11-20 19:04:28.025503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.807 [2024-11-20 19:04:28.025536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.807 qpair failed and we were unable to recover it. 
00:27:05.807 [2024-11-20 19:04:28.025748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.807 [2024-11-20 19:04:28.025781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.807 qpair failed and we were unable to recover it. 00:27:05.807 [2024-11-20 19:04:28.026075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.807 [2024-11-20 19:04:28.026108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.807 qpair failed and we were unable to recover it. 00:27:05.807 [2024-11-20 19:04:28.026263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.807 [2024-11-20 19:04:28.026297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.807 qpair failed and we were unable to recover it. 00:27:05.807 [2024-11-20 19:04:28.026511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.807 [2024-11-20 19:04:28.026545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.807 qpair failed and we were unable to recover it. 00:27:05.807 [2024-11-20 19:04:28.026717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.807 [2024-11-20 19:04:28.026750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.807 qpair failed and we were unable to recover it. 
00:27:05.807 [2024-11-20 19:04:28.027014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.807 [2024-11-20 19:04:28.027047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.807 qpair failed and we were unable to recover it. 00:27:05.807 [2024-11-20 19:04:28.027251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.807 [2024-11-20 19:04:28.027286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.807 qpair failed and we were unable to recover it. 00:27:05.807 [2024-11-20 19:04:28.027423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.807 [2024-11-20 19:04:28.027456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.807 qpair failed and we were unable to recover it. 00:27:05.807 [2024-11-20 19:04:28.027580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.808 [2024-11-20 19:04:28.027616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.808 qpair failed and we were unable to recover it. 00:27:05.808 [2024-11-20 19:04:28.027803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.808 [2024-11-20 19:04:28.027853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.808 qpair failed and we were unable to recover it. 
00:27:05.808 [2024-11-20 19:04:28.027978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.808 [2024-11-20 19:04:28.028010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.808 qpair failed and we were unable to recover it.
[... identical connect()-failed / qpair-failed records (errno = 111, tqpair=0x1b6aba0, addr=10.0.0.2, port=4420) repeat through 2024-11-20 19:04:28.054 ...]
00:27:05.811 [2024-11-20 19:04:28.054449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.811 [2024-11-20 19:04:28.054483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.811 qpair failed and we were unable to recover it. 00:27:05.811 [2024-11-20 19:04:28.054609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.811 [2024-11-20 19:04:28.054642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.811 qpair failed and we were unable to recover it. 00:27:05.811 [2024-11-20 19:04:28.054913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.811 [2024-11-20 19:04:28.054946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.811 qpair failed and we were unable to recover it. 00:27:05.811 [2024-11-20 19:04:28.055228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.811 [2024-11-20 19:04:28.055262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.811 qpair failed and we were unable to recover it. 00:27:05.811 [2024-11-20 19:04:28.055531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.811 [2024-11-20 19:04:28.055565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.811 qpair failed and we were unable to recover it. 
00:27:05.811 [2024-11-20 19:04:28.055746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.811 [2024-11-20 19:04:28.055779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.811 qpair failed and we were unable to recover it. 00:27:05.811 [2024-11-20 19:04:28.056045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.811 [2024-11-20 19:04:28.056084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.811 qpair failed and we were unable to recover it. 00:27:05.811 [2024-11-20 19:04:28.056224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.811 [2024-11-20 19:04:28.056260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.811 qpair failed and we were unable to recover it. 00:27:05.811 [2024-11-20 19:04:28.056466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.811 [2024-11-20 19:04:28.056500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.811 qpair failed and we were unable to recover it. 00:27:05.811 [2024-11-20 19:04:28.056630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.811 [2024-11-20 19:04:28.056664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.811 qpair failed and we were unable to recover it. 
00:27:05.811 [2024-11-20 19:04:28.056905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.811 [2024-11-20 19:04:28.056938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.811 qpair failed and we were unable to recover it. 00:27:05.811 [2024-11-20 19:04:28.057192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.811 [2024-11-20 19:04:28.057234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.811 qpair failed and we were unable to recover it. 00:27:05.811 [2024-11-20 19:04:28.057446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.811 [2024-11-20 19:04:28.057479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.811 qpair failed and we were unable to recover it. 00:27:05.811 [2024-11-20 19:04:28.057651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.811 [2024-11-20 19:04:28.057684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.811 qpair failed and we were unable to recover it. 00:27:05.811 [2024-11-20 19:04:28.057798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.811 [2024-11-20 19:04:28.057830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.811 qpair failed and we were unable to recover it. 
00:27:05.811 [2024-11-20 19:04:28.058017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.811 [2024-11-20 19:04:28.058050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.811 qpair failed and we were unable to recover it. 00:27:05.811 [2024-11-20 19:04:28.058252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.811 [2024-11-20 19:04:28.058294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.811 qpair failed and we were unable to recover it. 00:27:05.811 [2024-11-20 19:04:28.058477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.811 [2024-11-20 19:04:28.058510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.811 qpair failed and we were unable to recover it. 00:27:05.811 [2024-11-20 19:04:28.058755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.811 [2024-11-20 19:04:28.058788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.811 qpair failed and we were unable to recover it. 00:27:05.811 [2024-11-20 19:04:28.059086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.812 [2024-11-20 19:04:28.059118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.812 qpair failed and we were unable to recover it. 
00:27:05.812 [2024-11-20 19:04:28.059275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.812 [2024-11-20 19:04:28.059310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.812 qpair failed and we were unable to recover it. 00:27:05.812 [2024-11-20 19:04:28.059624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.812 [2024-11-20 19:04:28.059658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.812 qpair failed and we were unable to recover it. 00:27:05.812 [2024-11-20 19:04:28.059845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.812 [2024-11-20 19:04:28.059878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.812 qpair failed and we were unable to recover it. 00:27:05.812 [2024-11-20 19:04:28.060118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.812 [2024-11-20 19:04:28.060151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.812 qpair failed and we were unable to recover it. 00:27:05.812 [2024-11-20 19:04:28.060427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.812 [2024-11-20 19:04:28.060462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.812 qpair failed and we were unable to recover it. 
00:27:05.812 [2024-11-20 19:04:28.060671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.812 [2024-11-20 19:04:28.060704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.812 qpair failed and we were unable to recover it. 00:27:05.812 [2024-11-20 19:04:28.060881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.812 [2024-11-20 19:04:28.060914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.812 qpair failed and we were unable to recover it. 00:27:05.812 [2024-11-20 19:04:28.061142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.812 [2024-11-20 19:04:28.061176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.812 qpair failed and we were unable to recover it. 00:27:05.812 [2024-11-20 19:04:28.061462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.812 [2024-11-20 19:04:28.061504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.812 qpair failed and we were unable to recover it. 00:27:05.812 [2024-11-20 19:04:28.061793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.812 [2024-11-20 19:04:28.061835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.812 qpair failed and we were unable to recover it. 
00:27:05.812 [2024-11-20 19:04:28.062031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.812 [2024-11-20 19:04:28.062064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.812 qpair failed and we were unable to recover it. 00:27:05.812 [2024-11-20 19:04:28.062273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.812 [2024-11-20 19:04:28.062307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.812 qpair failed and we were unable to recover it. 00:27:05.812 [2024-11-20 19:04:28.062502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.812 [2024-11-20 19:04:28.062535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.812 qpair failed and we were unable to recover it. 00:27:05.812 [2024-11-20 19:04:28.062738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.812 [2024-11-20 19:04:28.062776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.812 qpair failed and we were unable to recover it. 00:27:05.812 [2024-11-20 19:04:28.063053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.812 [2024-11-20 19:04:28.063086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.812 qpair failed and we were unable to recover it. 
00:27:05.812 [2024-11-20 19:04:28.063224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.812 [2024-11-20 19:04:28.063258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.812 qpair failed and we were unable to recover it. 00:27:05.812 [2024-11-20 19:04:28.063501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.812 [2024-11-20 19:04:28.063534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.812 qpair failed and we were unable to recover it. 00:27:05.812 [2024-11-20 19:04:28.063659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.812 [2024-11-20 19:04:28.063693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.812 qpair failed and we were unable to recover it. 00:27:05.812 [2024-11-20 19:04:28.063821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.812 [2024-11-20 19:04:28.063855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.812 qpair failed and we were unable to recover it. 00:27:05.812 [2024-11-20 19:04:28.064031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.812 [2024-11-20 19:04:28.064064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.812 qpair failed and we were unable to recover it. 
00:27:05.812 [2024-11-20 19:04:28.064208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.812 [2024-11-20 19:04:28.064242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.812 qpair failed and we were unable to recover it. 00:27:05.812 [2024-11-20 19:04:28.064368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.812 [2024-11-20 19:04:28.064401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.812 qpair failed and we were unable to recover it. 00:27:05.812 [2024-11-20 19:04:28.064601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.812 [2024-11-20 19:04:28.064635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.812 qpair failed and we were unable to recover it. 00:27:05.812 [2024-11-20 19:04:28.064840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.812 [2024-11-20 19:04:28.064872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.812 qpair failed and we were unable to recover it. 00:27:05.812 [2024-11-20 19:04:28.065086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.812 [2024-11-20 19:04:28.065119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.812 qpair failed and we were unable to recover it. 
00:27:05.812 [2024-11-20 19:04:28.065410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.812 [2024-11-20 19:04:28.065444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.812 qpair failed and we were unable to recover it. 00:27:05.812 [2024-11-20 19:04:28.065637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.812 [2024-11-20 19:04:28.065670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.812 qpair failed and we were unable to recover it. 00:27:05.812 [2024-11-20 19:04:28.065894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.812 [2024-11-20 19:04:28.065927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.812 qpair failed and we were unable to recover it. 00:27:05.812 [2024-11-20 19:04:28.066069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.812 [2024-11-20 19:04:28.066102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.812 qpair failed and we were unable to recover it. 00:27:05.812 [2024-11-20 19:04:28.066343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.812 [2024-11-20 19:04:28.066378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.812 qpair failed and we were unable to recover it. 
00:27:05.812 [2024-11-20 19:04:28.066580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.812 [2024-11-20 19:04:28.066613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.812 qpair failed and we were unable to recover it. 00:27:05.812 [2024-11-20 19:04:28.066836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.812 [2024-11-20 19:04:28.066869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.812 qpair failed and we were unable to recover it. 00:27:05.812 [2024-11-20 19:04:28.067056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.812 [2024-11-20 19:04:28.067101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.812 qpair failed and we were unable to recover it. 00:27:05.812 [2024-11-20 19:04:28.067288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.812 [2024-11-20 19:04:28.067322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.812 qpair failed and we were unable to recover it. 00:27:05.812 [2024-11-20 19:04:28.067449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.813 [2024-11-20 19:04:28.067481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.813 qpair failed and we were unable to recover it. 
00:27:05.813 [2024-11-20 19:04:28.067662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.813 [2024-11-20 19:04:28.067696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.813 qpair failed and we were unable to recover it. 00:27:05.813 [2024-11-20 19:04:28.068010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.813 [2024-11-20 19:04:28.068043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.813 qpair failed and we were unable to recover it. 00:27:05.813 [2024-11-20 19:04:28.068150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.813 [2024-11-20 19:04:28.068183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.813 qpair failed and we were unable to recover it. 00:27:05.813 [2024-11-20 19:04:28.068406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.813 [2024-11-20 19:04:28.068440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.813 qpair failed and we were unable to recover it. 00:27:05.813 [2024-11-20 19:04:28.068684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.813 [2024-11-20 19:04:28.068717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.813 qpair failed and we were unable to recover it. 
00:27:05.813 [2024-11-20 19:04:28.068924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.813 [2024-11-20 19:04:28.068957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.813 qpair failed and we were unable to recover it. 00:27:05.813 [2024-11-20 19:04:28.069155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.813 [2024-11-20 19:04:28.069188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.813 qpair failed and we were unable to recover it. 00:27:05.813 [2024-11-20 19:04:28.069386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.813 [2024-11-20 19:04:28.069420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.813 qpair failed and we were unable to recover it. 00:27:05.813 [2024-11-20 19:04:28.069661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.813 [2024-11-20 19:04:28.069693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.813 qpair failed and we were unable to recover it. 00:27:05.813 [2024-11-20 19:04:28.069911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.813 [2024-11-20 19:04:28.069945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.813 qpair failed and we were unable to recover it. 
00:27:05.813 [2024-11-20 19:04:28.070229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.813 [2024-11-20 19:04:28.070263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.813 qpair failed and we were unable to recover it. 00:27:05.813 [2024-11-20 19:04:28.070457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.813 [2024-11-20 19:04:28.070490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.813 qpair failed and we were unable to recover it. 00:27:05.813 [2024-11-20 19:04:28.070733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.813 [2024-11-20 19:04:28.070766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.813 qpair failed and we were unable to recover it. 00:27:05.813 [2024-11-20 19:04:28.071035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.813 [2024-11-20 19:04:28.071069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.813 qpair failed and we were unable to recover it. 00:27:05.813 [2024-11-20 19:04:28.071305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.813 [2024-11-20 19:04:28.071340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.813 qpair failed and we were unable to recover it. 
00:27:05.813 [2024-11-20 19:04:28.071537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.813 [2024-11-20 19:04:28.071570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.813 qpair failed and we were unable to recover it. 00:27:05.813 [2024-11-20 19:04:28.071811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.813 [2024-11-20 19:04:28.071844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.813 qpair failed and we were unable to recover it. 00:27:05.813 [2024-11-20 19:04:28.072037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.813 [2024-11-20 19:04:28.072070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.813 qpair failed and we were unable to recover it. 00:27:05.813 [2024-11-20 19:04:28.072336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.813 [2024-11-20 19:04:28.072370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.813 qpair failed and we were unable to recover it. 00:27:05.813 [2024-11-20 19:04:28.072562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.813 [2024-11-20 19:04:28.072605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:05.813 qpair failed and we were unable to recover it. 
00:27:05.813 [2024-11-20 19:04:28.072882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.813 [2024-11-20 19:04:28.072914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:05.813 qpair failed and we were unable to recover it.
[identical connect()/qpair failure for tqpair=0x1b6aba0 (addr=10.0.0.2, port=4420) repeated from 19:04:28.072882 through 19:04:28.098461; duplicate entries elided]
00:27:06.095 [2024-11-20 19:04:28.098461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.095 [2024-11-20 19:04:28.098494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.095 qpair failed and we were unable to recover it.
00:27:06.095 [2024-11-20 19:04:28.098709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.095 [2024-11-20 19:04:28.098742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.095 qpair failed and we were unable to recover it. 00:27:06.095 [2024-11-20 19:04:28.098917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.095 [2024-11-20 19:04:28.098951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.095 qpair failed and we were unable to recover it. 00:27:06.095 [2024-11-20 19:04:28.099227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.095 [2024-11-20 19:04:28.099261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.095 qpair failed and we were unable to recover it. 00:27:06.095 [2024-11-20 19:04:28.099388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.095 [2024-11-20 19:04:28.099421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.095 qpair failed and we were unable to recover it. 00:27:06.095 [2024-11-20 19:04:28.099540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.095 [2024-11-20 19:04:28.099573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.095 qpair failed and we were unable to recover it. 
00:27:06.095 [2024-11-20 19:04:28.099752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.095 [2024-11-20 19:04:28.099791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.095 qpair failed and we were unable to recover it. 00:27:06.095 [2024-11-20 19:04:28.100007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.095 [2024-11-20 19:04:28.100045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.095 qpair failed and we were unable to recover it. 00:27:06.095 [2024-11-20 19:04:28.100288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.095 [2024-11-20 19:04:28.100321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.095 qpair failed and we were unable to recover it. 00:27:06.095 [2024-11-20 19:04:28.100506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.095 [2024-11-20 19:04:28.100545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.095 qpair failed and we were unable to recover it. 00:27:06.095 [2024-11-20 19:04:28.100695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.095 [2024-11-20 19:04:28.100728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.095 qpair failed and we were unable to recover it. 
00:27:06.095 [2024-11-20 19:04:28.100996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.095 [2024-11-20 19:04:28.101029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.095 qpair failed and we were unable to recover it. 00:27:06.095 [2024-11-20 19:04:28.101301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.095 [2024-11-20 19:04:28.101336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.095 qpair failed and we were unable to recover it. 00:27:06.095 [2024-11-20 19:04:28.101529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.095 [2024-11-20 19:04:28.101563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.095 qpair failed and we were unable to recover it. 00:27:06.095 [2024-11-20 19:04:28.101783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.095 [2024-11-20 19:04:28.101819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.095 qpair failed and we were unable to recover it. 00:27:06.095 [2024-11-20 19:04:28.102013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.095 [2024-11-20 19:04:28.102054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.095 qpair failed and we were unable to recover it. 
00:27:06.095 [2024-11-20 19:04:28.102322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.095 [2024-11-20 19:04:28.102366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.095 qpair failed and we were unable to recover it. 00:27:06.095 [2024-11-20 19:04:28.102584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.095 [2024-11-20 19:04:28.102617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.095 qpair failed and we were unable to recover it. 00:27:06.095 [2024-11-20 19:04:28.102858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.095 [2024-11-20 19:04:28.102891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.095 qpair failed and we were unable to recover it. 00:27:06.095 [2024-11-20 19:04:28.103104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.095 [2024-11-20 19:04:28.103137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.095 qpair failed and we were unable to recover it. 00:27:06.095 [2024-11-20 19:04:28.103344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.095 [2024-11-20 19:04:28.103378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.095 qpair failed and we were unable to recover it. 
00:27:06.095 [2024-11-20 19:04:28.103587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.095 [2024-11-20 19:04:28.103619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.095 qpair failed and we were unable to recover it. 00:27:06.095 [2024-11-20 19:04:28.103794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.095 [2024-11-20 19:04:28.103826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.095 qpair failed and we were unable to recover it. 00:27:06.095 [2024-11-20 19:04:28.103959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.095 [2024-11-20 19:04:28.103993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.095 qpair failed and we were unable to recover it. 00:27:06.095 [2024-11-20 19:04:28.104185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.095 [2024-11-20 19:04:28.104225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.095 qpair failed and we were unable to recover it. 00:27:06.095 [2024-11-20 19:04:28.104352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.095 [2024-11-20 19:04:28.104386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.095 qpair failed and we were unable to recover it. 
00:27:06.095 [2024-11-20 19:04:28.104581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.095 [2024-11-20 19:04:28.104614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.095 qpair failed and we were unable to recover it. 00:27:06.095 [2024-11-20 19:04:28.104881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.095 [2024-11-20 19:04:28.104914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.095 qpair failed and we were unable to recover it. 00:27:06.095 [2024-11-20 19:04:28.105089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.095 [2024-11-20 19:04:28.105122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.095 qpair failed and we were unable to recover it. 00:27:06.095 [2024-11-20 19:04:28.105318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.095 [2024-11-20 19:04:28.105354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.095 qpair failed and we were unable to recover it. 00:27:06.095 [2024-11-20 19:04:28.105488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.096 [2024-11-20 19:04:28.105528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.096 qpair failed and we were unable to recover it. 
00:27:06.096 [2024-11-20 19:04:28.105651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.096 [2024-11-20 19:04:28.105685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.096 qpair failed and we were unable to recover it. 00:27:06.096 [2024-11-20 19:04:28.105872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.096 [2024-11-20 19:04:28.105907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.096 qpair failed and we were unable to recover it. 00:27:06.096 [2024-11-20 19:04:28.106091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.096 [2024-11-20 19:04:28.106124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.096 qpair failed and we were unable to recover it. 00:27:06.096 [2024-11-20 19:04:28.106372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.096 [2024-11-20 19:04:28.106412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.096 qpair failed and we were unable to recover it. 00:27:06.096 [2024-11-20 19:04:28.106587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.096 [2024-11-20 19:04:28.106621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.096 qpair failed and we were unable to recover it. 
00:27:06.096 [2024-11-20 19:04:28.106864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.096 [2024-11-20 19:04:28.106897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.096 qpair failed and we were unable to recover it. 00:27:06.096 [2024-11-20 19:04:28.107016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.096 [2024-11-20 19:04:28.107049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.096 qpair failed and we were unable to recover it. 00:27:06.096 [2024-11-20 19:04:28.107238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.096 [2024-11-20 19:04:28.107273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.096 qpair failed and we were unable to recover it. 00:27:06.096 [2024-11-20 19:04:28.107463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.096 [2024-11-20 19:04:28.107496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.096 qpair failed and we were unable to recover it. 00:27:06.096 [2024-11-20 19:04:28.107615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.096 [2024-11-20 19:04:28.107649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.096 qpair failed and we were unable to recover it. 
00:27:06.096 [2024-11-20 19:04:28.107831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.096 [2024-11-20 19:04:28.107871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.096 qpair failed and we were unable to recover it. 00:27:06.096 [2024-11-20 19:04:28.108060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.096 [2024-11-20 19:04:28.108092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.096 qpair failed and we were unable to recover it. 00:27:06.096 [2024-11-20 19:04:28.108295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.096 [2024-11-20 19:04:28.108329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.096 qpair failed and we were unable to recover it. 00:27:06.096 [2024-11-20 19:04:28.108523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.096 [2024-11-20 19:04:28.108557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.096 qpair failed and we were unable to recover it. 00:27:06.096 [2024-11-20 19:04:28.108698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.096 [2024-11-20 19:04:28.108731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.096 qpair failed and we were unable to recover it. 
00:27:06.096 [2024-11-20 19:04:28.108902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.096 [2024-11-20 19:04:28.108935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.096 qpair failed and we were unable to recover it. 00:27:06.096 [2024-11-20 19:04:28.109131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.096 [2024-11-20 19:04:28.109165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.096 qpair failed and we were unable to recover it. 00:27:06.096 [2024-11-20 19:04:28.109424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.096 [2024-11-20 19:04:28.109458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.096 qpair failed and we were unable to recover it. 00:27:06.096 [2024-11-20 19:04:28.109648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.096 [2024-11-20 19:04:28.109680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.096 qpair failed and we were unable to recover it. 00:27:06.096 [2024-11-20 19:04:28.109855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.096 [2024-11-20 19:04:28.109888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.096 qpair failed and we were unable to recover it. 
00:27:06.096 [2024-11-20 19:04:28.110142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.096 [2024-11-20 19:04:28.110176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.096 qpair failed and we were unable to recover it. 00:27:06.096 [2024-11-20 19:04:28.110310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.096 [2024-11-20 19:04:28.110344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.096 qpair failed and we were unable to recover it. 00:27:06.096 [2024-11-20 19:04:28.110531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.096 [2024-11-20 19:04:28.110564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.096 qpair failed and we were unable to recover it. 00:27:06.096 [2024-11-20 19:04:28.110734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.096 [2024-11-20 19:04:28.110767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.096 qpair failed and we were unable to recover it. 00:27:06.096 [2024-11-20 19:04:28.110983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.096 [2024-11-20 19:04:28.111015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.096 qpair failed and we were unable to recover it. 
00:27:06.096 [2024-11-20 19:04:28.111151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.096 [2024-11-20 19:04:28.111185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.096 qpair failed and we were unable to recover it. 00:27:06.096 [2024-11-20 19:04:28.111370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.096 [2024-11-20 19:04:28.111404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.096 qpair failed and we were unable to recover it. 00:27:06.096 [2024-11-20 19:04:28.111585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.096 [2024-11-20 19:04:28.111617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.096 qpair failed and we were unable to recover it. 00:27:06.096 [2024-11-20 19:04:28.111743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.096 [2024-11-20 19:04:28.111776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.096 qpair failed and we were unable to recover it. 00:27:06.096 [2024-11-20 19:04:28.111963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.096 [2024-11-20 19:04:28.111996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.096 qpair failed and we were unable to recover it. 
00:27:06.096 [2024-11-20 19:04:28.112224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.096 [2024-11-20 19:04:28.112263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.096 qpair failed and we were unable to recover it. 00:27:06.096 [2024-11-20 19:04:28.112437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.096 [2024-11-20 19:04:28.112470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.096 qpair failed and we were unable to recover it. 00:27:06.096 [2024-11-20 19:04:28.112657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.097 [2024-11-20 19:04:28.112690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.097 qpair failed and we were unable to recover it. 00:27:06.097 [2024-11-20 19:04:28.112866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.097 [2024-11-20 19:04:28.112898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.097 qpair failed and we were unable to recover it. 00:27:06.097 [2024-11-20 19:04:28.113085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.097 [2024-11-20 19:04:28.113118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.097 qpair failed and we were unable to recover it. 
00:27:06.097 [2024-11-20 19:04:28.113355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.097 [2024-11-20 19:04:28.113391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.097 qpair failed and we were unable to recover it. 00:27:06.097 [2024-11-20 19:04:28.113633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.097 [2024-11-20 19:04:28.113667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.097 qpair failed and we were unable to recover it. 00:27:06.097 [2024-11-20 19:04:28.113888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.097 [2024-11-20 19:04:28.113921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.097 qpair failed and we were unable to recover it. 00:27:06.097 [2024-11-20 19:04:28.114036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.097 [2024-11-20 19:04:28.114068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.097 qpair failed and we were unable to recover it. 00:27:06.097 [2024-11-20 19:04:28.114325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.097 [2024-11-20 19:04:28.114360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.097 qpair failed and we were unable to recover it. 
00:27:06.097 [2024-11-20 19:04:28.114476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.097 [2024-11-20 19:04:28.114510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.097 qpair failed and we were unable to recover it. 00:27:06.097 [2024-11-20 19:04:28.114722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.097 [2024-11-20 19:04:28.114755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.097 qpair failed and we were unable to recover it. 00:27:06.097 [2024-11-20 19:04:28.114951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.097 [2024-11-20 19:04:28.114984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.097 qpair failed and we were unable to recover it. 00:27:06.097 [2024-11-20 19:04:28.115180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.097 [2024-11-20 19:04:28.115230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.097 qpair failed and we were unable to recover it. 00:27:06.097 [2024-11-20 19:04:28.115422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.097 [2024-11-20 19:04:28.115456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.097 qpair failed and we were unable to recover it. 
00:27:06.097 [2024-11-20 19:04:28.115629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.097 [2024-11-20 19:04:28.115663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.097 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111, then nvme_tcp_qpair_connect_sock error for tqpair=0x1b6aba0, addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously from 19:04:28.115910 through 19:04:28.139962 ...]
00:27:06.100 [2024-11-20 19:04:28.140067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.100 [2024-11-20 19:04:28.140100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.100 qpair failed and we were unable to recover it. 00:27:06.100 [2024-11-20 19:04:28.140222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.100 [2024-11-20 19:04:28.140257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.100 qpair failed and we were unable to recover it. 00:27:06.100 [2024-11-20 19:04:28.140365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.100 [2024-11-20 19:04:28.140399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.100 qpair failed and we were unable to recover it. 00:27:06.100 [2024-11-20 19:04:28.140590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.100 [2024-11-20 19:04:28.140624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.100 qpair failed and we were unable to recover it. 00:27:06.100 [2024-11-20 19:04:28.140732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.100 [2024-11-20 19:04:28.140765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.100 qpair failed and we were unable to recover it. 
00:27:06.100 [2024-11-20 19:04:28.141096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.100 [2024-11-20 19:04:28.141129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.100 qpair failed and we were unable to recover it. 00:27:06.100 [2024-11-20 19:04:28.141258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.100 [2024-11-20 19:04:28.141293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.100 qpair failed and we were unable to recover it. 00:27:06.100 [2024-11-20 19:04:28.141400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.100 [2024-11-20 19:04:28.141432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.100 qpair failed and we were unable to recover it. 00:27:06.100 [2024-11-20 19:04:28.141664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.100 [2024-11-20 19:04:28.141697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.100 qpair failed and we were unable to recover it. 00:27:06.100 [2024-11-20 19:04:28.141804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.100 [2024-11-20 19:04:28.141837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.100 qpair failed and we were unable to recover it. 
00:27:06.100 [2024-11-20 19:04:28.142012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.100 [2024-11-20 19:04:28.142044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.100 qpair failed and we were unable to recover it. 00:27:06.100 [2024-11-20 19:04:28.142239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.100 [2024-11-20 19:04:28.142274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.100 qpair failed and we were unable to recover it. 00:27:06.100 [2024-11-20 19:04:28.142470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.100 [2024-11-20 19:04:28.142502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.100 qpair failed and we were unable to recover it. 00:27:06.100 [2024-11-20 19:04:28.142691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.100 [2024-11-20 19:04:28.142724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 00:27:06.101 [2024-11-20 19:04:28.142844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.142877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 
00:27:06.101 [2024-11-20 19:04:28.142995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.143028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 00:27:06.101 [2024-11-20 19:04:28.143246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.143280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 00:27:06.101 [2024-11-20 19:04:28.143490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.143525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 00:27:06.101 [2024-11-20 19:04:28.143715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.143747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 00:27:06.101 [2024-11-20 19:04:28.143937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.143970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 
00:27:06.101 [2024-11-20 19:04:28.144223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.144258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 00:27:06.101 [2024-11-20 19:04:28.144453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.144485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 00:27:06.101 [2024-11-20 19:04:28.144591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.144624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 00:27:06.101 [2024-11-20 19:04:28.144746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.144779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 00:27:06.101 [2024-11-20 19:04:28.144961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.144995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 
00:27:06.101 [2024-11-20 19:04:28.145195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.145239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 00:27:06.101 [2024-11-20 19:04:28.145354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.145387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 00:27:06.101 [2024-11-20 19:04:28.145510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.145543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 00:27:06.101 [2024-11-20 19:04:28.145714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.145747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 00:27:06.101 [2024-11-20 19:04:28.145919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.145951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 
00:27:06.101 [2024-11-20 19:04:28.146088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.146121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 00:27:06.101 [2024-11-20 19:04:28.146300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.146335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 00:27:06.101 [2024-11-20 19:04:28.146521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.146555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 00:27:06.101 [2024-11-20 19:04:28.146661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.146700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 00:27:06.101 [2024-11-20 19:04:28.146878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.146911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 
00:27:06.101 [2024-11-20 19:04:28.147082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.147115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 00:27:06.101 [2024-11-20 19:04:28.147250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.147285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 00:27:06.101 [2024-11-20 19:04:28.147548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.147581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 00:27:06.101 [2024-11-20 19:04:28.147696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.147729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 00:27:06.101 [2024-11-20 19:04:28.147917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.147949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 
00:27:06.101 [2024-11-20 19:04:28.148077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.148110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 00:27:06.101 [2024-11-20 19:04:28.148315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.148349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 00:27:06.101 [2024-11-20 19:04:28.148538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.148571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 00:27:06.101 [2024-11-20 19:04:28.148690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.148723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 00:27:06.101 [2024-11-20 19:04:28.148898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.148930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 
00:27:06.101 [2024-11-20 19:04:28.149102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.149136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 00:27:06.101 [2024-11-20 19:04:28.149322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.149377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 00:27:06.101 [2024-11-20 19:04:28.149634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.149668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 00:27:06.101 [2024-11-20 19:04:28.149912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.149946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 00:27:06.101 [2024-11-20 19:04:28.150168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.150210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 
00:27:06.101 [2024-11-20 19:04:28.150349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.150383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 00:27:06.101 [2024-11-20 19:04:28.150553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.101 [2024-11-20 19:04:28.150587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.101 qpair failed and we were unable to recover it. 00:27:06.102 [2024-11-20 19:04:28.150719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.102 [2024-11-20 19:04:28.150752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.102 qpair failed and we were unable to recover it. 00:27:06.102 [2024-11-20 19:04:28.151037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.102 [2024-11-20 19:04:28.151070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.102 qpair failed and we were unable to recover it. 00:27:06.102 [2024-11-20 19:04:28.151272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.102 [2024-11-20 19:04:28.151307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.102 qpair failed and we were unable to recover it. 
00:27:06.102 [2024-11-20 19:04:28.151573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.102 [2024-11-20 19:04:28.151607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.102 qpair failed and we were unable to recover it. 00:27:06.102 [2024-11-20 19:04:28.151864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.102 [2024-11-20 19:04:28.151898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.102 qpair failed and we were unable to recover it. 00:27:06.102 [2024-11-20 19:04:28.152082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.102 [2024-11-20 19:04:28.152115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.102 qpair failed and we were unable to recover it. 00:27:06.102 [2024-11-20 19:04:28.152325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.102 [2024-11-20 19:04:28.152360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.102 qpair failed and we were unable to recover it. 00:27:06.102 [2024-11-20 19:04:28.152601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.102 [2024-11-20 19:04:28.152635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.102 qpair failed and we were unable to recover it. 
00:27:06.102 [2024-11-20 19:04:28.152776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.102 [2024-11-20 19:04:28.152814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.102 qpair failed and we were unable to recover it. 00:27:06.102 [2024-11-20 19:04:28.153082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.102 [2024-11-20 19:04:28.153116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.102 qpair failed and we were unable to recover it. 00:27:06.102 [2024-11-20 19:04:28.153372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.102 [2024-11-20 19:04:28.153407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.102 qpair failed and we were unable to recover it. 00:27:06.102 [2024-11-20 19:04:28.153610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.102 [2024-11-20 19:04:28.153643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.102 qpair failed and we were unable to recover it. 00:27:06.102 [2024-11-20 19:04:28.153900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.102 [2024-11-20 19:04:28.153933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.102 qpair failed and we were unable to recover it. 
00:27:06.102 [2024-11-20 19:04:28.154108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.102 [2024-11-20 19:04:28.154142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.102 qpair failed and we were unable to recover it. 00:27:06.102 [2024-11-20 19:04:28.154361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.102 [2024-11-20 19:04:28.154396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.102 qpair failed and we were unable to recover it. 00:27:06.102 [2024-11-20 19:04:28.154589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.102 [2024-11-20 19:04:28.154623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.102 qpair failed and we were unable to recover it. 00:27:06.102 [2024-11-20 19:04:28.154795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.102 [2024-11-20 19:04:28.154829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.102 qpair failed and we were unable to recover it. 00:27:06.102 [2024-11-20 19:04:28.155108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.102 [2024-11-20 19:04:28.155141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.102 qpair failed and we were unable to recover it. 
00:27:06.102 [2024-11-20 19:04:28.155341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.102 [2024-11-20 19:04:28.155375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.102 qpair failed and we were unable to recover it. 00:27:06.102 [2024-11-20 19:04:28.155573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.102 [2024-11-20 19:04:28.155607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.102 qpair failed and we were unable to recover it. 00:27:06.102 [2024-11-20 19:04:28.155782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.102 [2024-11-20 19:04:28.155815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.102 qpair failed and we were unable to recover it. 00:27:06.102 [2024-11-20 19:04:28.156056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.102 [2024-11-20 19:04:28.156089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.102 qpair failed and we were unable to recover it. 00:27:06.102 [2024-11-20 19:04:28.156294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.102 [2024-11-20 19:04:28.156329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.102 qpair failed and we were unable to recover it. 
00:27:06.102 [2024-11-20 19:04:28.156503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.102 [2024-11-20 19:04:28.156537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.102 qpair failed and we were unable to recover it.
00:27:06.102 [2024-11-20 19:04:28.156723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.102 [2024-11-20 19:04:28.156758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.102 qpair failed and we were unable to recover it.
00:27:06.102 [2024-11-20 19:04:28.156999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.102 [2024-11-20 19:04:28.157033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.102 qpair failed and we were unable to recover it.
00:27:06.102 [2024-11-20 19:04:28.157216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.102 [2024-11-20 19:04:28.157251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.102 qpair failed and we were unable to recover it.
00:27:06.102 [2024-11-20 19:04:28.157388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.102 [2024-11-20 19:04:28.157422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.102 qpair failed and we were unable to recover it.
00:27:06.102 [2024-11-20 19:04:28.157688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.102 [2024-11-20 19:04:28.157721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.102 qpair failed and we were unable to recover it.
00:27:06.102 [2024-11-20 19:04:28.158002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.102 [2024-11-20 19:04:28.158036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.102 qpair failed and we were unable to recover it.
00:27:06.102 [2024-11-20 19:04:28.158310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.102 [2024-11-20 19:04:28.158345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.102 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.158602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.158635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.158826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.158860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.159003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.159037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.159178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.159229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.159421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.159461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.159711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.159745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.159919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.159951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.160180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.160222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.160416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.160450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.160721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.160755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.160999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.161033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.161300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.161335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.161478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.161512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.161634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.161667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.161859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.161892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.162014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.162048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.162256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.162291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.162412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.162445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.162624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.162697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.162938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.162976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.163190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.163239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.163388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.163423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.163563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.163596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.163860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.163893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.164175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.164217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.164486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.164519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.164698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.164732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.165002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.165035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.165298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.165332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.165564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.165598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.165872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.165905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.166090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.166133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.166399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.166433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.166565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.166599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.166805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.166838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.167053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.167086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.167215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.167250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.103 [2024-11-20 19:04:28.167530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.103 [2024-11-20 19:04:28.167564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:06.103 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.167683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.167716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.167979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.168014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.168199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.168258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.168398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.168431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.168548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.168581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.168692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.168726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.168846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.168879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.169141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.169174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.169378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.169415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.169687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.169720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.169918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.169952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.170220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.170254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.170472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.170506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.170792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.170825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.171012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.171045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.171289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.171326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.171514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.171548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.171689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.171722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.171986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.172020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.172243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.172278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.172547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.172587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.172810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.172843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.173112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.173146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.173427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.173462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.173634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.173668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.173869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.173904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.174016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.174050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.174248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.174283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.174402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.174436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.174564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.174598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.174785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.174818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.175101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.175136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.175329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.175365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.175588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.175622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.175818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.175852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.176026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.176060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.176274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.176309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.176575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.104 [2024-11-20 19:04:28.176609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.104 qpair failed and we were unable to recover it.
00:27:06.104 [2024-11-20 19:04:28.176896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.105 [2024-11-20 19:04:28.176930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.105 qpair failed and we were unable to recover it.
00:27:06.105 [2024-11-20 19:04:28.177180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.105 [2024-11-20 19:04:28.177221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.105 qpair failed and we were unable to recover it.
00:27:06.105 [2024-11-20 19:04:28.177413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.105 [2024-11-20 19:04:28.177447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.105 qpair failed and we were unable to recover it.
00:27:06.105 [2024-11-20 19:04:28.177718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.105 [2024-11-20 19:04:28.177752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.105 qpair failed and we were unable to recover it.
00:27:06.105 [2024-11-20 19:04:28.177959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.105 [2024-11-20 19:04:28.177993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.105 qpair failed and we were unable to recover it.
00:27:06.105 [2024-11-20 19:04:28.178219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.105 [2024-11-20 19:04:28.178255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.105 qpair failed and we were unable to recover it.
00:27:06.105 [2024-11-20 19:04:28.178454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.105 [2024-11-20 19:04:28.178488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.105 qpair failed and we were unable to recover it.
00:27:06.105 [2024-11-20 19:04:28.178705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.105 [2024-11-20 19:04:28.178739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.105 qpair failed and we were unable to recover it.
00:27:06.105 [2024-11-20 19:04:28.178924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.105 [2024-11-20 19:04:28.178958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.105 qpair failed and we were unable to recover it.
00:27:06.105 [2024-11-20 19:04:28.179225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.105 [2024-11-20 19:04:28.179266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.105 qpair failed and we were unable to recover it.
00:27:06.105 [2024-11-20 19:04:28.179544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.105 [2024-11-20 19:04:28.179579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.105 qpair failed and we were unable to recover it.
00:27:06.105 [2024-11-20 19:04:28.179771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.105 [2024-11-20 19:04:28.179805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.105 qpair failed and we were unable to recover it.
00:27:06.105 [2024-11-20 19:04:28.180000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.105 [2024-11-20 19:04:28.180034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.105 qpair failed and we were unable to recover it.
00:27:06.105 [2024-11-20 19:04:28.180211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.105 [2024-11-20 19:04:28.180247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.105 qpair failed and we were unable to recover it.
00:27:06.105 [2024-11-20 19:04:28.180433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.105 [2024-11-20 19:04:28.180467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.105 qpair failed and we were unable to recover it.
00:27:06.105 [2024-11-20 19:04:28.180578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.105 [2024-11-20 19:04:28.180609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.105 qpair failed and we were unable to recover it.
00:27:06.105 [2024-11-20 19:04:28.180818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.105 [2024-11-20 19:04:28.180854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.105 qpair failed and we were unable to recover it.
00:27:06.105 [2024-11-20 19:04:28.181092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.105 [2024-11-20 19:04:28.181126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.105 qpair failed and we were unable to recover it.
00:27:06.105 [2024-11-20 19:04:28.181361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.105 [2024-11-20 19:04:28.181395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.105 qpair failed and we were unable to recover it.
00:27:06.105 [2024-11-20 19:04:28.181633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.105 [2024-11-20 19:04:28.181667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.105 qpair failed and we were unable to recover it.
00:27:06.105 [2024-11-20 19:04:28.181802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.105 [2024-11-20 19:04:28.181835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.105 qpair failed and we were unable to recover it.
00:27:06.105 [2024-11-20 19:04:28.182101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.105 [2024-11-20 19:04:28.182134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.105 qpair failed and we were unable to recover it.
00:27:06.105 [2024-11-20 19:04:28.182399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.105 [2024-11-20 19:04:28.182433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.105 qpair failed and we were unable to recover it.
00:27:06.105 [2024-11-20 19:04:28.182721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.105 [2024-11-20 19:04:28.182756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.105 qpair failed and we were unable to recover it.
00:27:06.105 [2024-11-20 19:04:28.183026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.105 [2024-11-20 19:04:28.183059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.105 qpair failed and we were unable to recover it.
00:27:06.105 [2024-11-20 19:04:28.183239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.105 [2024-11-20 19:04:28.183275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.105 qpair failed and we were unable to recover it.
00:27:06.105 [2024-11-20 19:04:28.183461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.105 [2024-11-20 19:04:28.183495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.105 qpair failed and we were unable to recover it.
00:27:06.105 [2024-11-20 19:04:28.183757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.105 [2024-11-20 19:04:28.183790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.105 qpair failed and we were unable to recover it.
00:27:06.105 [2024-11-20 19:04:28.183964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.105 [2024-11-20 19:04:28.183998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.105 qpair failed and we were unable to recover it.
00:27:06.105 [2024-11-20 19:04:28.184144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.105 [2024-11-20 19:04:28.184178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.105 qpair failed and we were unable to recover it.
00:27:06.105 [2024-11-20 19:04:28.184459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.105 [2024-11-20 19:04:28.184493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.105 qpair failed and we were unable to recover it.
00:27:06.105 [2024-11-20 19:04:28.184767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.105 [2024-11-20 19:04:28.184800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.105 qpair failed and we were unable to recover it. 00:27:06.105 [2024-11-20 19:04:28.185086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.105 [2024-11-20 19:04:28.185119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.105 qpair failed and we were unable to recover it. 00:27:06.105 [2024-11-20 19:04:28.185394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.105 [2024-11-20 19:04:28.185429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.105 qpair failed and we were unable to recover it. 00:27:06.105 [2024-11-20 19:04:28.185707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.105 [2024-11-20 19:04:28.185741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.105 qpair failed and we were unable to recover it. 00:27:06.105 [2024-11-20 19:04:28.185917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.105 [2024-11-20 19:04:28.185951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.105 qpair failed and we were unable to recover it. 
00:27:06.105 [2024-11-20 19:04:28.186127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.105 [2024-11-20 19:04:28.186167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.105 qpair failed and we were unable to recover it. 00:27:06.105 [2024-11-20 19:04:28.186418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.105 [2024-11-20 19:04:28.186453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.105 qpair failed and we were unable to recover it. 00:27:06.106 [2024-11-20 19:04:28.186744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.186778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.106 qpair failed and we were unable to recover it. 00:27:06.106 [2024-11-20 19:04:28.187047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.187080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.106 qpair failed and we were unable to recover it. 00:27:06.106 [2024-11-20 19:04:28.187366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.187401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.106 qpair failed and we were unable to recover it. 
00:27:06.106 [2024-11-20 19:04:28.187670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.187704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.106 qpair failed and we were unable to recover it. 00:27:06.106 [2024-11-20 19:04:28.187945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.187978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.106 qpair failed and we were unable to recover it. 00:27:06.106 [2024-11-20 19:04:28.188272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.188306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.106 qpair failed and we were unable to recover it. 00:27:06.106 [2024-11-20 19:04:28.188597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.188631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.106 qpair failed and we were unable to recover it. 00:27:06.106 [2024-11-20 19:04:28.188898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.188932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.106 qpair failed and we were unable to recover it. 
00:27:06.106 [2024-11-20 19:04:28.189211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.189246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.106 qpair failed and we were unable to recover it. 00:27:06.106 [2024-11-20 19:04:28.189502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.189536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.106 qpair failed and we were unable to recover it. 00:27:06.106 [2024-11-20 19:04:28.189828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.189863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.106 qpair failed and we were unable to recover it. 00:27:06.106 [2024-11-20 19:04:28.190125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.190159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.106 qpair failed and we were unable to recover it. 00:27:06.106 [2024-11-20 19:04:28.190379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.190414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.106 qpair failed and we were unable to recover it. 
00:27:06.106 [2024-11-20 19:04:28.190651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.190685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.106 qpair failed and we were unable to recover it. 00:27:06.106 [2024-11-20 19:04:28.190975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.191008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.106 qpair failed and we were unable to recover it. 00:27:06.106 [2024-11-20 19:04:28.191247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.191282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.106 qpair failed and we were unable to recover it. 00:27:06.106 [2024-11-20 19:04:28.191469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.191503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.106 qpair failed and we were unable to recover it. 00:27:06.106 [2024-11-20 19:04:28.191612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.191644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.106 qpair failed and we were unable to recover it. 
00:27:06.106 [2024-11-20 19:04:28.191814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.191847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.106 qpair failed and we were unable to recover it. 00:27:06.106 [2024-11-20 19:04:28.192092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.192125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.106 qpair failed and we were unable to recover it. 00:27:06.106 [2024-11-20 19:04:28.192373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.192408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.106 qpair failed and we were unable to recover it. 00:27:06.106 [2024-11-20 19:04:28.192651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.192684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.106 qpair failed and we were unable to recover it. 00:27:06.106 [2024-11-20 19:04:28.192879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.192912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.106 qpair failed and we were unable to recover it. 
00:27:06.106 [2024-11-20 19:04:28.193118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.193152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.106 qpair failed and we were unable to recover it. 00:27:06.106 [2024-11-20 19:04:28.193399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.193433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.106 qpair failed and we were unable to recover it. 00:27:06.106 [2024-11-20 19:04:28.193631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.193665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.106 qpair failed and we were unable to recover it. 00:27:06.106 [2024-11-20 19:04:28.193788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.193821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.106 qpair failed and we were unable to recover it. 00:27:06.106 [2024-11-20 19:04:28.194019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.194053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.106 qpair failed and we were unable to recover it. 
00:27:06.106 [2024-11-20 19:04:28.194312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.194346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.106 qpair failed and we were unable to recover it. 00:27:06.106 [2024-11-20 19:04:28.194587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.194620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.106 qpair failed and we were unable to recover it. 00:27:06.106 [2024-11-20 19:04:28.194884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.194917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.106 qpair failed and we were unable to recover it. 00:27:06.106 [2024-11-20 19:04:28.195156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.195191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.106 qpair failed and we were unable to recover it. 00:27:06.106 [2024-11-20 19:04:28.195457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.195490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.106 qpair failed and we were unable to recover it. 
00:27:06.106 [2024-11-20 19:04:28.195665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.195698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.106 qpair failed and we were unable to recover it. 00:27:06.106 [2024-11-20 19:04:28.195943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.195978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.106 qpair failed and we were unable to recover it. 00:27:06.106 [2024-11-20 19:04:28.196177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.196219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.106 qpair failed and we were unable to recover it. 00:27:06.106 [2024-11-20 19:04:28.196401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.106 [2024-11-20 19:04:28.196434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.107 qpair failed and we were unable to recover it. 00:27:06.107 [2024-11-20 19:04:28.196676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.107 [2024-11-20 19:04:28.196710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.107 qpair failed and we were unable to recover it. 
00:27:06.107 [2024-11-20 19:04:28.196832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.107 [2024-11-20 19:04:28.196865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.107 qpair failed and we were unable to recover it. 00:27:06.107 [2024-11-20 19:04:28.197130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.107 [2024-11-20 19:04:28.197169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.107 qpair failed and we were unable to recover it. 00:27:06.107 [2024-11-20 19:04:28.197358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.107 [2024-11-20 19:04:28.197394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.107 qpair failed and we were unable to recover it. 00:27:06.107 [2024-11-20 19:04:28.197658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.107 [2024-11-20 19:04:28.197692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.107 qpair failed and we were unable to recover it. 00:27:06.107 [2024-11-20 19:04:28.197887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.107 [2024-11-20 19:04:28.197920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.107 qpair failed and we were unable to recover it. 
00:27:06.107 [2024-11-20 19:04:28.198121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.107 [2024-11-20 19:04:28.198155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.107 qpair failed and we were unable to recover it. 00:27:06.107 [2024-11-20 19:04:28.198428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.107 [2024-11-20 19:04:28.198465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.107 qpair failed and we were unable to recover it. 00:27:06.107 [2024-11-20 19:04:28.198708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.107 [2024-11-20 19:04:28.198743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.107 qpair failed and we were unable to recover it. 00:27:06.107 [2024-11-20 19:04:28.198950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.107 [2024-11-20 19:04:28.198983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.107 qpair failed and we were unable to recover it. 00:27:06.107 [2024-11-20 19:04:28.199253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.107 [2024-11-20 19:04:28.199287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.107 qpair failed and we were unable to recover it. 
00:27:06.107 [2024-11-20 19:04:28.199546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.107 [2024-11-20 19:04:28.199581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.107 qpair failed and we were unable to recover it. 00:27:06.107 [2024-11-20 19:04:28.199774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.107 [2024-11-20 19:04:28.199807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.107 qpair failed and we were unable to recover it. 00:27:06.107 [2024-11-20 19:04:28.199994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.107 [2024-11-20 19:04:28.200028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.107 qpair failed and we were unable to recover it. 00:27:06.107 [2024-11-20 19:04:28.200293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.107 [2024-11-20 19:04:28.200329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.107 qpair failed and we were unable to recover it. 00:27:06.107 [2024-11-20 19:04:28.200463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.107 [2024-11-20 19:04:28.200498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.107 qpair failed and we were unable to recover it. 
00:27:06.107 [2024-11-20 19:04:28.200748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.107 [2024-11-20 19:04:28.200782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.107 qpair failed and we were unable to recover it. 00:27:06.107 [2024-11-20 19:04:28.200991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.107 [2024-11-20 19:04:28.201026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.107 qpair failed and we were unable to recover it. 00:27:06.107 [2024-11-20 19:04:28.201225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.107 [2024-11-20 19:04:28.201259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.107 qpair failed and we were unable to recover it. 00:27:06.107 [2024-11-20 19:04:28.201514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.107 [2024-11-20 19:04:28.201548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.107 qpair failed and we were unable to recover it. 00:27:06.107 [2024-11-20 19:04:28.201829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.107 [2024-11-20 19:04:28.201863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.107 qpair failed and we were unable to recover it. 
00:27:06.107 [2024-11-20 19:04:28.202003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.107 [2024-11-20 19:04:28.202036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.107 qpair failed and we were unable to recover it. 00:27:06.107 [2024-11-20 19:04:28.202301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.107 [2024-11-20 19:04:28.202336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.107 qpair failed and we were unable to recover it. 00:27:06.107 [2024-11-20 19:04:28.202516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.107 [2024-11-20 19:04:28.202549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.107 qpair failed and we were unable to recover it. 00:27:06.107 [2024-11-20 19:04:28.202670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.107 [2024-11-20 19:04:28.202704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.107 qpair failed and we were unable to recover it. 00:27:06.107 [2024-11-20 19:04:28.202955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.107 [2024-11-20 19:04:28.202989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.107 qpair failed and we were unable to recover it. 
00:27:06.107 [2024-11-20 19:04:28.203269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.107 [2024-11-20 19:04:28.203304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.107 qpair failed and we were unable to recover it. 00:27:06.107 [2024-11-20 19:04:28.203579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.107 [2024-11-20 19:04:28.203615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.107 qpair failed and we were unable to recover it. 00:27:06.107 [2024-11-20 19:04:28.203801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.107 [2024-11-20 19:04:28.203836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.107 qpair failed and we were unable to recover it. 00:27:06.107 [2024-11-20 19:04:28.204019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.107 [2024-11-20 19:04:28.204058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.107 qpair failed and we were unable to recover it. 00:27:06.107 [2024-11-20 19:04:28.204253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.107 [2024-11-20 19:04:28.204289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.107 qpair failed and we were unable to recover it. 
00:27:06.107 [2024-11-20 19:04:28.204554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.107 [2024-11-20 19:04:28.204588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.107 qpair failed and we were unable to recover it. 00:27:06.107 [2024-11-20 19:04:28.204842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.107 [2024-11-20 19:04:28.204876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.107 qpair failed and we were unable to recover it. 00:27:06.107 [2024-11-20 19:04:28.205076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.107 [2024-11-20 19:04:28.205109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.107 qpair failed and we were unable to recover it. 00:27:06.107 [2024-11-20 19:04:28.205374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.107 [2024-11-20 19:04:28.205410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.107 qpair failed and we were unable to recover it. 00:27:06.107 [2024-11-20 19:04:28.205613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.108 [2024-11-20 19:04:28.205647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.108 qpair failed and we were unable to recover it. 
00:27:06.110 [2024-11-20 19:04:28.236466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.110 [2024-11-20 19:04:28.236505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.110 qpair failed and we were unable to recover it. 00:27:06.110 [2024-11-20 19:04:28.236764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.110 [2024-11-20 19:04:28.236800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.110 qpair failed and we were unable to recover it. 00:27:06.110 [2024-11-20 19:04:28.237046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.110 [2024-11-20 19:04:28.237080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.110 qpair failed and we were unable to recover it. 00:27:06.110 [2024-11-20 19:04:28.237291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.110 [2024-11-20 19:04:28.237325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.111 qpair failed and we were unable to recover it. 00:27:06.111 [2024-11-20 19:04:28.237583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.111 [2024-11-20 19:04:28.237617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.111 qpair failed and we were unable to recover it. 
00:27:06.111 [2024-11-20 19:04:28.237910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.111 [2024-11-20 19:04:28.237951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.111 qpair failed and we were unable to recover it. 00:27:06.111 [2024-11-20 19:04:28.238189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.111 [2024-11-20 19:04:28.238230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.111 qpair failed and we were unable to recover it. 00:27:06.111 [2024-11-20 19:04:28.238364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.111 [2024-11-20 19:04:28.238399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.111 qpair failed and we were unable to recover it. 00:27:06.111 [2024-11-20 19:04:28.238640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.111 [2024-11-20 19:04:28.238674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.111 qpair failed and we were unable to recover it. 00:27:06.111 [2024-11-20 19:04:28.238913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.111 [2024-11-20 19:04:28.238946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.111 qpair failed and we were unable to recover it. 
00:27:06.111 [2024-11-20 19:04:28.239121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.111 [2024-11-20 19:04:28.239154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.111 qpair failed and we were unable to recover it. 00:27:06.111 [2024-11-20 19:04:28.239462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.111 [2024-11-20 19:04:28.239496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.111 qpair failed and we were unable to recover it. 00:27:06.111 [2024-11-20 19:04:28.239783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.111 [2024-11-20 19:04:28.239816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.111 qpair failed and we were unable to recover it. 00:27:06.111 [2024-11-20 19:04:28.239992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.111 [2024-11-20 19:04:28.240025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.111 qpair failed and we were unable to recover it. 00:27:06.111 [2024-11-20 19:04:28.240278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.111 [2024-11-20 19:04:28.240313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.111 qpair failed and we were unable to recover it. 
00:27:06.111 [2024-11-20 19:04:28.240602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.111 [2024-11-20 19:04:28.240635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.111 qpair failed and we were unable to recover it. 00:27:06.111 [2024-11-20 19:04:28.240903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.111 [2024-11-20 19:04:28.240938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.111 qpair failed and we were unable to recover it. 00:27:06.111 [2024-11-20 19:04:28.241228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.111 [2024-11-20 19:04:28.241263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.111 qpair failed and we were unable to recover it. 00:27:06.111 [2024-11-20 19:04:28.241527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.111 [2024-11-20 19:04:28.241560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.111 qpair failed and we were unable to recover it. 00:27:06.111 [2024-11-20 19:04:28.241758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.111 [2024-11-20 19:04:28.241792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.111 qpair failed and we were unable to recover it. 
00:27:06.111 [2024-11-20 19:04:28.242034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.111 [2024-11-20 19:04:28.242066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.111 qpair failed and we were unable to recover it. 00:27:06.111 [2024-11-20 19:04:28.242363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.111 [2024-11-20 19:04:28.242398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.111 qpair failed and we were unable to recover it. 00:27:06.111 [2024-11-20 19:04:28.242593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.111 [2024-11-20 19:04:28.242626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.111 qpair failed and we were unable to recover it. 00:27:06.111 [2024-11-20 19:04:28.242868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.111 [2024-11-20 19:04:28.242901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.111 qpair failed and we were unable to recover it. 00:27:06.111 [2024-11-20 19:04:28.243091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.111 [2024-11-20 19:04:28.243125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.111 qpair failed and we were unable to recover it. 
00:27:06.111 [2024-11-20 19:04:28.243312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.111 [2024-11-20 19:04:28.243348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.111 qpair failed and we were unable to recover it. 00:27:06.111 [2024-11-20 19:04:28.243524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.111 [2024-11-20 19:04:28.243557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.111 qpair failed and we were unable to recover it. 00:27:06.111 [2024-11-20 19:04:28.243846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.111 [2024-11-20 19:04:28.243880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.111 qpair failed and we were unable to recover it. 00:27:06.111 [2024-11-20 19:04:28.244148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.111 [2024-11-20 19:04:28.244181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.111 qpair failed and we were unable to recover it. 00:27:06.111 [2024-11-20 19:04:28.244463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.111 [2024-11-20 19:04:28.244498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.111 qpair failed and we were unable to recover it. 
00:27:06.111 [2024-11-20 19:04:28.244696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.111 [2024-11-20 19:04:28.244730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.111 qpair failed and we were unable to recover it. 00:27:06.111 [2024-11-20 19:04:28.244914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.111 [2024-11-20 19:04:28.244949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.111 qpair failed and we were unable to recover it. 00:27:06.111 [2024-11-20 19:04:28.245137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.111 [2024-11-20 19:04:28.245170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.111 qpair failed and we were unable to recover it. 00:27:06.111 [2024-11-20 19:04:28.245451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.111 [2024-11-20 19:04:28.245485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.111 qpair failed and we were unable to recover it. 00:27:06.111 [2024-11-20 19:04:28.245591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.111 [2024-11-20 19:04:28.245625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.111 qpair failed and we were unable to recover it. 
00:27:06.111 [2024-11-20 19:04:28.245865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.111 [2024-11-20 19:04:28.245899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.111 qpair failed and we were unable to recover it. 00:27:06.111 [2024-11-20 19:04:28.246032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.246066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 00:27:06.112 [2024-11-20 19:04:28.246276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.246311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 00:27:06.112 [2024-11-20 19:04:28.246594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.246627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 00:27:06.112 [2024-11-20 19:04:28.246893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.246926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 
00:27:06.112 [2024-11-20 19:04:28.247136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.247170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 00:27:06.112 [2024-11-20 19:04:28.247430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.247464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 00:27:06.112 [2024-11-20 19:04:28.247750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.247783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 00:27:06.112 [2024-11-20 19:04:28.247977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.248011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 00:27:06.112 [2024-11-20 19:04:28.248272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.248307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 
00:27:06.112 [2024-11-20 19:04:28.248518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.248551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 00:27:06.112 [2024-11-20 19:04:28.248803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.248837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 00:27:06.112 [2024-11-20 19:04:28.249093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.249127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 00:27:06.112 [2024-11-20 19:04:28.249264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.249298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 00:27:06.112 [2024-11-20 19:04:28.249565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.249599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 
00:27:06.112 [2024-11-20 19:04:28.249786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.249817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 00:27:06.112 [2024-11-20 19:04:28.249930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.249964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 00:27:06.112 [2024-11-20 19:04:28.250138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.250171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 00:27:06.112 [2024-11-20 19:04:28.250449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.250483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 00:27:06.112 [2024-11-20 19:04:28.250725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.250758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 
00:27:06.112 [2024-11-20 19:04:28.251025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.251058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 00:27:06.112 [2024-11-20 19:04:28.251331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.251366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 00:27:06.112 [2024-11-20 19:04:28.251546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.251580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 00:27:06.112 [2024-11-20 19:04:28.251779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.251813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 00:27:06.112 [2024-11-20 19:04:28.252098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.252130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 
00:27:06.112 [2024-11-20 19:04:28.252405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.252439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 00:27:06.112 [2024-11-20 19:04:28.252708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.252741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 00:27:06.112 [2024-11-20 19:04:28.253012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.253046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 00:27:06.112 [2024-11-20 19:04:28.253306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.253340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 00:27:06.112 [2024-11-20 19:04:28.253576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.253609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 
00:27:06.112 [2024-11-20 19:04:28.253851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.253885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 00:27:06.112 [2024-11-20 19:04:28.254094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.254128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 00:27:06.112 [2024-11-20 19:04:28.254395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.254430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 00:27:06.112 [2024-11-20 19:04:28.254671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.254704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 00:27:06.112 [2024-11-20 19:04:28.254891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.254925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 
00:27:06.112 [2024-11-20 19:04:28.255116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.255149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 00:27:06.112 [2024-11-20 19:04:28.255435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.255471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 00:27:06.112 [2024-11-20 19:04:28.255730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.255763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 00:27:06.112 [2024-11-20 19:04:28.255954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.112 [2024-11-20 19:04:28.255993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.112 qpair failed and we were unable to recover it. 00:27:06.112 [2024-11-20 19:04:28.256195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.113 [2024-11-20 19:04:28.256243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.113 qpair failed and we were unable to recover it. 
00:27:06.113 [2024-11-20 19:04:28.256494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.113 [2024-11-20 19:04:28.256528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.113 qpair failed and we were unable to recover it.
00:27:06.113 [identical error triple repeated through 2024-11-20 19:04:28.287170: connect() to 10.0.0.2 port 4420 kept failing with errno = 111 (ECONNREFUSED) and the qpair could not be recovered]
00:27:06.116 [2024-11-20 19:04:28.287357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.116 [2024-11-20 19:04:28.287393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.116 qpair failed and we were unable to recover it. 00:27:06.116 [2024-11-20 19:04:28.287515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.116 [2024-11-20 19:04:28.287549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.116 qpair failed and we were unable to recover it. 00:27:06.116 [2024-11-20 19:04:28.287819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.116 [2024-11-20 19:04:28.287852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.116 qpair failed and we were unable to recover it. 00:27:06.116 [2024-11-20 19:04:28.288121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.116 [2024-11-20 19:04:28.288155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.116 qpair failed and we were unable to recover it. 00:27:06.116 [2024-11-20 19:04:28.288433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.116 [2024-11-20 19:04:28.288468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.116 qpair failed and we were unable to recover it. 
00:27:06.116 [2024-11-20 19:04:28.288755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.116 [2024-11-20 19:04:28.288795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.116 qpair failed and we were unable to recover it. 00:27:06.116 [2024-11-20 19:04:28.289060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.116 [2024-11-20 19:04:28.289094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.116 qpair failed and we were unable to recover it. 00:27:06.116 [2024-11-20 19:04:28.289349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.116 [2024-11-20 19:04:28.289385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.116 qpair failed and we were unable to recover it. 00:27:06.116 [2024-11-20 19:04:28.289629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.116 [2024-11-20 19:04:28.289663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.116 qpair failed and we were unable to recover it. 00:27:06.116 [2024-11-20 19:04:28.289908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.116 [2024-11-20 19:04:28.289943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.116 qpair failed and we were unable to recover it. 
00:27:06.116 [2024-11-20 19:04:28.290188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.116 [2024-11-20 19:04:28.290232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.116 qpair failed and we were unable to recover it. 00:27:06.116 [2024-11-20 19:04:28.290353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.116 [2024-11-20 19:04:28.290386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.116 qpair failed and we were unable to recover it. 00:27:06.116 [2024-11-20 19:04:28.290630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.116 [2024-11-20 19:04:28.290664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.116 qpair failed and we were unable to recover it. 00:27:06.116 [2024-11-20 19:04:28.290843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.116 [2024-11-20 19:04:28.290877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.116 qpair failed and we were unable to recover it. 00:27:06.116 [2024-11-20 19:04:28.290997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.116 [2024-11-20 19:04:28.291031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.116 qpair failed and we were unable to recover it. 
00:27:06.116 [2024-11-20 19:04:28.291311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.116 [2024-11-20 19:04:28.291347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.116 qpair failed and we were unable to recover it. 00:27:06.116 [2024-11-20 19:04:28.291618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.116 [2024-11-20 19:04:28.291651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.116 qpair failed and we were unable to recover it. 00:27:06.116 [2024-11-20 19:04:28.291846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.116 [2024-11-20 19:04:28.291881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.116 qpair failed and we were unable to recover it. 00:27:06.116 [2024-11-20 19:04:28.292082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.116 [2024-11-20 19:04:28.292116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.116 qpair failed and we were unable to recover it. 00:27:06.116 [2024-11-20 19:04:28.292335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.116 [2024-11-20 19:04:28.292371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.116 qpair failed and we were unable to recover it. 
00:27:06.116 [2024-11-20 19:04:28.292591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.116 [2024-11-20 19:04:28.292624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.116 qpair failed and we were unable to recover it. 00:27:06.116 [2024-11-20 19:04:28.292811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.116 [2024-11-20 19:04:28.292846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.116 qpair failed and we were unable to recover it. 00:27:06.116 [2024-11-20 19:04:28.293063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.116 [2024-11-20 19:04:28.293097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.116 qpair failed and we were unable to recover it. 00:27:06.116 [2024-11-20 19:04:28.293369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.116 [2024-11-20 19:04:28.293406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.116 qpair failed and we were unable to recover it. 00:27:06.116 [2024-11-20 19:04:28.293689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.116 [2024-11-20 19:04:28.293723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.116 qpair failed and we were unable to recover it. 
00:27:06.116 [2024-11-20 19:04:28.294012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.116 [2024-11-20 19:04:28.294047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.116 qpair failed and we were unable to recover it. 00:27:06.116 [2024-11-20 19:04:28.294256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.116 [2024-11-20 19:04:28.294291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.116 qpair failed and we were unable to recover it. 00:27:06.116 [2024-11-20 19:04:28.294490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.116 [2024-11-20 19:04:28.294523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.116 qpair failed and we were unable to recover it. 00:27:06.116 [2024-11-20 19:04:28.294793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.116 [2024-11-20 19:04:28.294827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.116 qpair failed and we were unable to recover it. 00:27:06.116 [2024-11-20 19:04:28.295113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.116 [2024-11-20 19:04:28.295146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.116 qpair failed and we were unable to recover it. 
00:27:06.116 [2024-11-20 19:04:28.295352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.116 [2024-11-20 19:04:28.295388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.116 qpair failed and we were unable to recover it. 00:27:06.116 [2024-11-20 19:04:28.295577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.116 [2024-11-20 19:04:28.295610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.116 qpair failed and we were unable to recover it. 00:27:06.116 [2024-11-20 19:04:28.295789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.116 [2024-11-20 19:04:28.295830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.116 qpair failed and we were unable to recover it. 00:27:06.117 [2024-11-20 19:04:28.296077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.296111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 00:27:06.117 [2024-11-20 19:04:28.296383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.296419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 
00:27:06.117 [2024-11-20 19:04:28.296690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.296725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 00:27:06.117 [2024-11-20 19:04:28.297003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.297036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 00:27:06.117 [2024-11-20 19:04:28.297314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.297350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 00:27:06.117 [2024-11-20 19:04:28.297647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.297681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 00:27:06.117 [2024-11-20 19:04:28.297944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.297979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 
00:27:06.117 [2024-11-20 19:04:28.298181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.298224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 00:27:06.117 [2024-11-20 19:04:28.298503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.298537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 00:27:06.117 [2024-11-20 19:04:28.298803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.298837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 00:27:06.117 [2024-11-20 19:04:28.299127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.299162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 00:27:06.117 [2024-11-20 19:04:28.299385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.299420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 
00:27:06.117 [2024-11-20 19:04:28.299623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.299656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 00:27:06.117 [2024-11-20 19:04:28.299903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.299936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 00:27:06.117 [2024-11-20 19:04:28.300123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.300156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 00:27:06.117 [2024-11-20 19:04:28.300443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.300477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 00:27:06.117 [2024-11-20 19:04:28.300599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.300633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 
00:27:06.117 [2024-11-20 19:04:28.300883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.300917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 00:27:06.117 [2024-11-20 19:04:28.301192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.301242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 00:27:06.117 [2024-11-20 19:04:28.301536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.301570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 00:27:06.117 [2024-11-20 19:04:28.301764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.301799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 00:27:06.117 [2024-11-20 19:04:28.302047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.302082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 
00:27:06.117 [2024-11-20 19:04:28.302343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.302380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 00:27:06.117 [2024-11-20 19:04:28.302581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.302616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 00:27:06.117 [2024-11-20 19:04:28.302805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.302840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 00:27:06.117 [2024-11-20 19:04:28.303033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.303067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 00:27:06.117 [2024-11-20 19:04:28.303313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.303349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 
00:27:06.117 [2024-11-20 19:04:28.303565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.303598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 00:27:06.117 [2024-11-20 19:04:28.303776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.303810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 00:27:06.117 [2024-11-20 19:04:28.304053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.304087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 00:27:06.117 [2024-11-20 19:04:28.304288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.304324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 00:27:06.117 [2024-11-20 19:04:28.304504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.304537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 
00:27:06.117 [2024-11-20 19:04:28.304732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.304767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 00:27:06.117 [2024-11-20 19:04:28.304943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.304977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 00:27:06.117 [2024-11-20 19:04:28.305177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.305219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 00:27:06.117 [2024-11-20 19:04:28.305397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.305429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 00:27:06.117 [2024-11-20 19:04:28.305628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.305661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 
00:27:06.117 [2024-11-20 19:04:28.305905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.117 [2024-11-20 19:04:28.305939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.117 qpair failed and we were unable to recover it. 00:27:06.118 [2024-11-20 19:04:28.306221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.118 [2024-11-20 19:04:28.306257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.118 qpair failed and we were unable to recover it. 00:27:06.118 [2024-11-20 19:04:28.306504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.118 [2024-11-20 19:04:28.306538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.118 qpair failed and we were unable to recover it. 00:27:06.118 [2024-11-20 19:04:28.306790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.118 [2024-11-20 19:04:28.306824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.118 qpair failed and we were unable to recover it. 00:27:06.118 [2024-11-20 19:04:28.307070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.118 [2024-11-20 19:04:28.307104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.118 qpair failed and we were unable to recover it. 
00:27:06.118 [2024-11-20 19:04:28.307403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.118 [2024-11-20 19:04:28.307440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.118 qpair failed and we were unable to recover it.
00:27:06.121 [the connect()/sock-connection-error/"qpair failed" triplet above repeats unchanged, with timestamps advancing from 19:04:28.307701 through 19:04:28.338270]
00:27:06.121 [2024-11-20 19:04:28.338561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.121 [2024-11-20 19:04:28.338595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.121 qpair failed and we were unable to recover it. 00:27:06.121 [2024-11-20 19:04:28.338870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.121 [2024-11-20 19:04:28.338905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.121 qpair failed and we were unable to recover it. 00:27:06.121 [2024-11-20 19:04:28.339189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.121 [2024-11-20 19:04:28.339241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.121 qpair failed and we were unable to recover it. 00:27:06.121 [2024-11-20 19:04:28.339443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.121 [2024-11-20 19:04:28.339479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.121 qpair failed and we were unable to recover it. 00:27:06.121 [2024-11-20 19:04:28.339664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.121 [2024-11-20 19:04:28.339698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.121 qpair failed and we were unable to recover it. 
00:27:06.121 [2024-11-20 19:04:28.339955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.121 [2024-11-20 19:04:28.339991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.121 qpair failed and we were unable to recover it. 00:27:06.121 [2024-11-20 19:04:28.340267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.121 [2024-11-20 19:04:28.340303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.121 qpair failed and we were unable to recover it. 00:27:06.121 [2024-11-20 19:04:28.340582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.121 [2024-11-20 19:04:28.340617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.121 qpair failed and we were unable to recover it. 00:27:06.121 [2024-11-20 19:04:28.340746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.121 [2024-11-20 19:04:28.340781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.121 qpair failed and we were unable to recover it. 00:27:06.121 [2024-11-20 19:04:28.341030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.121 [2024-11-20 19:04:28.341064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.121 qpair failed and we were unable to recover it. 
00:27:06.121 [2024-11-20 19:04:28.341314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.121 [2024-11-20 19:04:28.341349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.121 qpair failed and we were unable to recover it. 00:27:06.121 [2024-11-20 19:04:28.341611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.121 [2024-11-20 19:04:28.341645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.121 qpair failed and we were unable to recover it. 00:27:06.121 [2024-11-20 19:04:28.341830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.121 [2024-11-20 19:04:28.341864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.121 qpair failed and we were unable to recover it. 00:27:06.121 [2024-11-20 19:04:28.342140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.121 [2024-11-20 19:04:28.342170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.121 qpair failed and we were unable to recover it. 00:27:06.121 [2024-11-20 19:04:28.342376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.121 [2024-11-20 19:04:28.342409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.121 qpair failed and we were unable to recover it. 
00:27:06.121 [2024-11-20 19:04:28.342704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.121 [2024-11-20 19:04:28.342737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.121 qpair failed and we were unable to recover it. 00:27:06.121 [2024-11-20 19:04:28.343006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.121 [2024-11-20 19:04:28.343039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.121 qpair failed and we were unable to recover it. 00:27:06.121 [2024-11-20 19:04:28.343289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.121 [2024-11-20 19:04:28.343344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.121 qpair failed and we were unable to recover it. 00:27:06.121 [2024-11-20 19:04:28.343638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.121 [2024-11-20 19:04:28.343672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.121 qpair failed and we were unable to recover it. 00:27:06.121 [2024-11-20 19:04:28.343882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.121 [2024-11-20 19:04:28.343916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.121 qpair failed and we were unable to recover it. 
00:27:06.121 [2024-11-20 19:04:28.344239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.121 [2024-11-20 19:04:28.344274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.121 qpair failed and we were unable to recover it. 00:27:06.121 [2024-11-20 19:04:28.344552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.121 [2024-11-20 19:04:28.344585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.121 qpair failed and we were unable to recover it. 00:27:06.121 [2024-11-20 19:04:28.344868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.121 [2024-11-20 19:04:28.344902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.121 qpair failed and we were unable to recover it. 00:27:06.121 [2024-11-20 19:04:28.345180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.121 [2024-11-20 19:04:28.345223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.121 qpair failed and we were unable to recover it. 00:27:06.121 [2024-11-20 19:04:28.345499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.121 [2024-11-20 19:04:28.345532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.121 qpair failed and we were unable to recover it. 
00:27:06.121 [2024-11-20 19:04:28.345812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.121 [2024-11-20 19:04:28.345847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.121 qpair failed and we were unable to recover it. 00:27:06.121 [2024-11-20 19:04:28.346128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.121 [2024-11-20 19:04:28.346163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.121 qpair failed and we were unable to recover it. 00:27:06.121 [2024-11-20 19:04:28.346317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.121 [2024-11-20 19:04:28.346352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.121 qpair failed and we were unable to recover it. 00:27:06.121 [2024-11-20 19:04:28.346604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.121 [2024-11-20 19:04:28.346638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.121 qpair failed and we were unable to recover it. 00:27:06.121 [2024-11-20 19:04:28.346911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.346946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 
00:27:06.122 [2024-11-20 19:04:28.347196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.347259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 00:27:06.122 [2024-11-20 19:04:28.347501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.347537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 00:27:06.122 [2024-11-20 19:04:28.347663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.347703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 00:27:06.122 [2024-11-20 19:04:28.347988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.348022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 00:27:06.122 [2024-11-20 19:04:28.348155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.348189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 
00:27:06.122 [2024-11-20 19:04:28.348423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.348458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 00:27:06.122 [2024-11-20 19:04:28.348748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.348782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 00:27:06.122 [2024-11-20 19:04:28.348993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.349029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 00:27:06.122 [2024-11-20 19:04:28.349345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.349380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 00:27:06.122 [2024-11-20 19:04:28.349658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.349693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 
00:27:06.122 [2024-11-20 19:04:28.349837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.349872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 00:27:06.122 [2024-11-20 19:04:28.350064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.350098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 00:27:06.122 [2024-11-20 19:04:28.350362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.350398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 00:27:06.122 [2024-11-20 19:04:28.350582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.350615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 00:27:06.122 [2024-11-20 19:04:28.350873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.350907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 
00:27:06.122 [2024-11-20 19:04:28.351189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.351234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 00:27:06.122 [2024-11-20 19:04:28.351474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.351507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 00:27:06.122 [2024-11-20 19:04:28.351785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.351819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 00:27:06.122 [2024-11-20 19:04:28.352102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.352135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 00:27:06.122 [2024-11-20 19:04:28.352341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.352377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 
00:27:06.122 [2024-11-20 19:04:28.352631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.352665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 00:27:06.122 [2024-11-20 19:04:28.352858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.352891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 00:27:06.122 [2024-11-20 19:04:28.353111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.353144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 00:27:06.122 [2024-11-20 19:04:28.353474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.353509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 00:27:06.122 [2024-11-20 19:04:28.353711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.353745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 
00:27:06.122 [2024-11-20 19:04:28.354046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.354080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 00:27:06.122 [2024-11-20 19:04:28.354224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.354260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 00:27:06.122 [2024-11-20 19:04:28.354409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.354443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 00:27:06.122 [2024-11-20 19:04:28.354724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.354758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 00:27:06.122 [2024-11-20 19:04:28.354947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.354987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 
00:27:06.122 [2024-11-20 19:04:28.355265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.355299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 00:27:06.122 [2024-11-20 19:04:28.355583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.355618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 00:27:06.122 [2024-11-20 19:04:28.355764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.355798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 00:27:06.122 [2024-11-20 19:04:28.356050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.356084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 00:27:06.122 [2024-11-20 19:04:28.356355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.356392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 
00:27:06.122 [2024-11-20 19:04:28.356596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.356631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 00:27:06.122 [2024-11-20 19:04:28.356829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.122 [2024-11-20 19:04:28.356863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.122 qpair failed and we were unable to recover it. 00:27:06.122 [2024-11-20 19:04:28.357133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.123 [2024-11-20 19:04:28.357167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.123 qpair failed and we were unable to recover it. 00:27:06.123 [2024-11-20 19:04:28.357381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.123 [2024-11-20 19:04:28.357417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.123 qpair failed and we were unable to recover it. 00:27:06.123 [2024-11-20 19:04:28.357626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.123 [2024-11-20 19:04:28.357659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.123 qpair failed and we were unable to recover it. 
00:27:06.123 [2024-11-20 19:04:28.357883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.123 [2024-11-20 19:04:28.357916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.123 qpair failed and we were unable to recover it. 00:27:06.123 [2024-11-20 19:04:28.358214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.123 [2024-11-20 19:04:28.358255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.123 qpair failed and we were unable to recover it. 00:27:06.123 [2024-11-20 19:04:28.358521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.123 [2024-11-20 19:04:28.358553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.123 qpair failed and we were unable to recover it. 00:27:06.123 [2024-11-20 19:04:28.358841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.123 [2024-11-20 19:04:28.358876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.123 qpair failed and we were unable to recover it. 00:27:06.123 [2024-11-20 19:04:28.359151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.123 [2024-11-20 19:04:28.359184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.123 qpair failed and we were unable to recover it. 
00:27:06.123 [2024-11-20 19:04:28.359473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.123 [2024-11-20 19:04:28.359508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.123 qpair failed and we were unable to recover it. 
00:27:06.126 [2024-11-20 19:04:28.391608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.126 [2024-11-20 19:04:28.391640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.126 qpair failed and we were unable to recover it. 00:27:06.126 [2024-11-20 19:04:28.391863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.126 [2024-11-20 19:04:28.391896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.126 qpair failed and we were unable to recover it. 00:27:06.126 [2024-11-20 19:04:28.392177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.126 [2024-11-20 19:04:28.392223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.126 qpair failed and we were unable to recover it. 00:27:06.126 [2024-11-20 19:04:28.392521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.126 [2024-11-20 19:04:28.392556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.126 qpair failed and we were unable to recover it. 00:27:06.126 [2024-11-20 19:04:28.392839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.126 [2024-11-20 19:04:28.392874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.126 qpair failed and we were unable to recover it. 
00:27:06.126 [2024-11-20 19:04:28.393222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.126 [2024-11-20 19:04:28.393300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.126 qpair failed and we were unable to recover it. 00:27:06.126 [2024-11-20 19:04:28.393611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.126 [2024-11-20 19:04:28.393651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.126 qpair failed and we were unable to recover it. 00:27:06.126 [2024-11-20 19:04:28.393962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.126 [2024-11-20 19:04:28.393999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.126 qpair failed and we were unable to recover it. 00:27:06.126 [2024-11-20 19:04:28.394222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.126 [2024-11-20 19:04:28.394258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.126 qpair failed and we were unable to recover it. 00:27:06.126 [2024-11-20 19:04:28.394542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.126 [2024-11-20 19:04:28.394577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.126 qpair failed and we were unable to recover it. 
00:27:06.126 [2024-11-20 19:04:28.394780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.126 [2024-11-20 19:04:28.394813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.126 qpair failed and we were unable to recover it. 00:27:06.126 [2024-11-20 19:04:28.395020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.126 [2024-11-20 19:04:28.395054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.126 qpair failed and we were unable to recover it. 00:27:06.126 [2024-11-20 19:04:28.395284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.126 [2024-11-20 19:04:28.395319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.126 qpair failed and we were unable to recover it. 00:27:06.126 [2024-11-20 19:04:28.395608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.126 [2024-11-20 19:04:28.395643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.126 qpair failed and we were unable to recover it. 00:27:06.126 [2024-11-20 19:04:28.395851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.126 [2024-11-20 19:04:28.395887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.126 qpair failed and we were unable to recover it. 
00:27:06.126 [2024-11-20 19:04:28.396108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.126 [2024-11-20 19:04:28.396143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.126 qpair failed and we were unable to recover it. 00:27:06.126 [2024-11-20 19:04:28.396354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.126 [2024-11-20 19:04:28.396390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.126 qpair failed and we were unable to recover it. 00:27:06.126 [2024-11-20 19:04:28.396648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.126 [2024-11-20 19:04:28.396684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.126 qpair failed and we were unable to recover it. 00:27:06.126 [2024-11-20 19:04:28.396889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.126 [2024-11-20 19:04:28.396934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.126 qpair failed and we were unable to recover it. 00:27:06.126 [2024-11-20 19:04:28.397120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.126 [2024-11-20 19:04:28.397155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.126 qpair failed and we were unable to recover it. 
00:27:06.126 [2024-11-20 19:04:28.397449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.126 [2024-11-20 19:04:28.397485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.126 qpair failed and we were unable to recover it. 00:27:06.126 [2024-11-20 19:04:28.397708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.126 [2024-11-20 19:04:28.397743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.126 qpair failed and we were unable to recover it. 00:27:06.126 [2024-11-20 19:04:28.398000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.126 [2024-11-20 19:04:28.398034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.126 qpair failed and we were unable to recover it. 00:27:06.126 [2024-11-20 19:04:28.398174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.126 [2024-11-20 19:04:28.398220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.126 qpair failed and we were unable to recover it. 00:27:06.126 [2024-11-20 19:04:28.398453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.126 [2024-11-20 19:04:28.398488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.126 qpair failed and we were unable to recover it. 
00:27:06.126 [2024-11-20 19:04:28.398773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.127 [2024-11-20 19:04:28.398807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.405 qpair failed and we were unable to recover it. 00:27:06.405 [2024-11-20 19:04:28.398995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.405 [2024-11-20 19:04:28.399029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.405 qpair failed and we were unable to recover it. 00:27:06.405 [2024-11-20 19:04:28.399340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.405 [2024-11-20 19:04:28.399377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.405 qpair failed and we were unable to recover it. 00:27:06.405 [2024-11-20 19:04:28.399636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.405 [2024-11-20 19:04:28.399670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.405 qpair failed and we were unable to recover it. 00:27:06.405 [2024-11-20 19:04:28.399949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.405 [2024-11-20 19:04:28.399984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.405 qpair failed and we were unable to recover it. 
00:27:06.405 [2024-11-20 19:04:28.400270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.405 [2024-11-20 19:04:28.400306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.405 qpair failed and we were unable to recover it. 00:27:06.405 [2024-11-20 19:04:28.400487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.405 [2024-11-20 19:04:28.400522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.405 qpair failed and we were unable to recover it. 00:27:06.405 [2024-11-20 19:04:28.400789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.405 [2024-11-20 19:04:28.400824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.405 qpair failed and we were unable to recover it. 00:27:06.405 [2024-11-20 19:04:28.401018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.405 [2024-11-20 19:04:28.401054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.405 qpair failed and we were unable to recover it. 00:27:06.405 [2024-11-20 19:04:28.401286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.405 [2024-11-20 19:04:28.401322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.405 qpair failed and we were unable to recover it. 
00:27:06.405 [2024-11-20 19:04:28.401601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.405 [2024-11-20 19:04:28.401636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.405 qpair failed and we were unable to recover it. 00:27:06.405 [2024-11-20 19:04:28.401836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.405 [2024-11-20 19:04:28.401872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.405 qpair failed and we were unable to recover it. 00:27:06.405 [2024-11-20 19:04:28.402176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.405 [2024-11-20 19:04:28.402222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.405 qpair failed and we were unable to recover it. 00:27:06.405 [2024-11-20 19:04:28.402354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.405 [2024-11-20 19:04:28.402390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.405 qpair failed and we were unable to recover it. 00:27:06.405 [2024-11-20 19:04:28.402673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.405 [2024-11-20 19:04:28.402708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.405 qpair failed and we were unable to recover it. 
00:27:06.405 [2024-11-20 19:04:28.402969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.405 [2024-11-20 19:04:28.403005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.405 qpair failed and we were unable to recover it. 00:27:06.405 [2024-11-20 19:04:28.403283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.405 [2024-11-20 19:04:28.403320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.405 qpair failed and we were unable to recover it. 00:27:06.405 [2024-11-20 19:04:28.403623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.405 [2024-11-20 19:04:28.403657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.405 qpair failed and we were unable to recover it. 00:27:06.406 [2024-11-20 19:04:28.403886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.406 [2024-11-20 19:04:28.403922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.406 qpair failed and we were unable to recover it. 00:27:06.406 [2024-11-20 19:04:28.404118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.406 [2024-11-20 19:04:28.404152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.406 qpair failed and we were unable to recover it. 
00:27:06.406 [2024-11-20 19:04:28.404472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.406 [2024-11-20 19:04:28.404508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.406 qpair failed and we were unable to recover it. 00:27:06.406 [2024-11-20 19:04:28.404723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.406 [2024-11-20 19:04:28.404759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.406 qpair failed and we were unable to recover it. 00:27:06.406 [2024-11-20 19:04:28.404969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.406 [2024-11-20 19:04:28.405003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.406 qpair failed and we were unable to recover it. 00:27:06.406 [2024-11-20 19:04:28.405141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.406 [2024-11-20 19:04:28.405176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.406 qpair failed and we were unable to recover it. 00:27:06.406 [2024-11-20 19:04:28.405386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.406 [2024-11-20 19:04:28.405423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.406 qpair failed and we were unable to recover it. 
00:27:06.406 [2024-11-20 19:04:28.405635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.406 [2024-11-20 19:04:28.405670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.406 qpair failed and we were unable to recover it. 00:27:06.406 [2024-11-20 19:04:28.405862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.406 [2024-11-20 19:04:28.405897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.406 qpair failed and we were unable to recover it. 00:27:06.406 [2024-11-20 19:04:28.406115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.406 [2024-11-20 19:04:28.406150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.406 qpair failed and we were unable to recover it. 00:27:06.406 [2024-11-20 19:04:28.406370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.406 [2024-11-20 19:04:28.406406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.406 qpair failed and we were unable to recover it. 00:27:06.406 [2024-11-20 19:04:28.406691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.406 [2024-11-20 19:04:28.406725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.406 qpair failed and we were unable to recover it. 
00:27:06.406 [2024-11-20 19:04:28.406918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.406 [2024-11-20 19:04:28.406953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.406 qpair failed and we were unable to recover it. 00:27:06.406 [2024-11-20 19:04:28.407176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.406 [2024-11-20 19:04:28.407225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.406 qpair failed and we were unable to recover it. 00:27:06.406 [2024-11-20 19:04:28.407419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.406 [2024-11-20 19:04:28.407454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.406 qpair failed and we were unable to recover it. 00:27:06.406 [2024-11-20 19:04:28.407756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.406 [2024-11-20 19:04:28.407797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.406 qpair failed and we were unable to recover it. 00:27:06.406 [2024-11-20 19:04:28.408057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.406 [2024-11-20 19:04:28.408092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.406 qpair failed and we were unable to recover it. 
00:27:06.406 [2024-11-20 19:04:28.408224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.406 [2024-11-20 19:04:28.408263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.406 qpair failed and we were unable to recover it. 00:27:06.406 [2024-11-20 19:04:28.408466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.406 [2024-11-20 19:04:28.408500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.406 qpair failed and we were unable to recover it. 00:27:06.406 [2024-11-20 19:04:28.408725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.406 [2024-11-20 19:04:28.408759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.406 qpair failed and we were unable to recover it. 00:27:06.406 [2024-11-20 19:04:28.409001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.406 [2024-11-20 19:04:28.409036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.406 qpair failed and we were unable to recover it. 00:27:06.406 [2024-11-20 19:04:28.409171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.406 [2024-11-20 19:04:28.409214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.406 qpair failed and we were unable to recover it. 
00:27:06.406 [2024-11-20 19:04:28.409417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.406 [2024-11-20 19:04:28.409452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.406 qpair failed and we were unable to recover it. 00:27:06.406 [2024-11-20 19:04:28.409745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.406 [2024-11-20 19:04:28.409780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.406 qpair failed and we were unable to recover it. 00:27:06.406 [2024-11-20 19:04:28.410053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.406 [2024-11-20 19:04:28.410087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.406 qpair failed and we were unable to recover it. 00:27:06.406 [2024-11-20 19:04:28.410294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.406 [2024-11-20 19:04:28.410330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.406 qpair failed and we were unable to recover it. 00:27:06.406 [2024-11-20 19:04:28.410601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.406 [2024-11-20 19:04:28.410636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.406 qpair failed and we were unable to recover it. 
00:27:06.406 [2024-11-20 19:04:28.410922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.406 [2024-11-20 19:04:28.410958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.406 qpair failed and we were unable to recover it. 00:27:06.406 [2024-11-20 19:04:28.411239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.406 [2024-11-20 19:04:28.411275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.406 qpair failed and we were unable to recover it. 00:27:06.406 [2024-11-20 19:04:28.411436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.406 [2024-11-20 19:04:28.411471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.406 qpair failed and we were unable to recover it. 00:27:06.406 [2024-11-20 19:04:28.411756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.406 [2024-11-20 19:04:28.411790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.406 qpair failed and we were unable to recover it. 00:27:06.406 [2024-11-20 19:04:28.411982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.406 [2024-11-20 19:04:28.412017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.406 qpair failed and we were unable to recover it. 
00:27:06.406 [2024-11-20 19:04:28.412225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.406 [2024-11-20 19:04:28.412261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:06.406 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix.c:1054:posix_sock_create connect() failed with errno = 111 (ECONNREFUSED); nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 19:04:28.412225 through 19:04:28.444480; repeated entries omitted ...]
00:27:06.410 [2024-11-20 19:04:28.444778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.410 [2024-11-20 19:04:28.444813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.410 qpair failed and we were unable to recover it. 00:27:06.410 [2024-11-20 19:04:28.445024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.410 [2024-11-20 19:04:28.445059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.410 qpair failed and we were unable to recover it. 00:27:06.410 [2024-11-20 19:04:28.445322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.410 [2024-11-20 19:04:28.445357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.410 qpair failed and we were unable to recover it. 00:27:06.410 [2024-11-20 19:04:28.445650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.410 [2024-11-20 19:04:28.445684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.410 qpair failed and we were unable to recover it. 00:27:06.410 [2024-11-20 19:04:28.445873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.410 [2024-11-20 19:04:28.445908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.410 qpair failed and we were unable to recover it. 
00:27:06.410 [2024-11-20 19:04:28.446186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.410 [2024-11-20 19:04:28.446232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.410 qpair failed and we were unable to recover it. 00:27:06.410 [2024-11-20 19:04:28.446506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.410 [2024-11-20 19:04:28.446541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.410 qpair failed and we were unable to recover it. 00:27:06.410 [2024-11-20 19:04:28.446823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.410 [2024-11-20 19:04:28.446858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.410 qpair failed and we were unable to recover it. 00:27:06.410 [2024-11-20 19:04:28.447137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.410 [2024-11-20 19:04:28.447171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.410 qpair failed and we were unable to recover it. 00:27:06.410 [2024-11-20 19:04:28.447370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.410 [2024-11-20 19:04:28.447405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.410 qpair failed and we were unable to recover it. 
00:27:06.410 [2024-11-20 19:04:28.447689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.410 [2024-11-20 19:04:28.447724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.410 qpair failed and we were unable to recover it. 00:27:06.410 [2024-11-20 19:04:28.448004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.410 [2024-11-20 19:04:28.448039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.410 qpair failed and we were unable to recover it. 00:27:06.410 [2024-11-20 19:04:28.448263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.410 [2024-11-20 19:04:28.448299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.410 qpair failed and we were unable to recover it. 00:27:06.410 [2024-11-20 19:04:28.448515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.410 [2024-11-20 19:04:28.448550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.410 qpair failed and we were unable to recover it. 00:27:06.410 [2024-11-20 19:04:28.448770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.410 [2024-11-20 19:04:28.448805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.410 qpair failed and we were unable to recover it. 
00:27:06.410 [2024-11-20 19:04:28.448959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.411 [2024-11-20 19:04:28.448996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.411 qpair failed and we were unable to recover it. 00:27:06.411 [2024-11-20 19:04:28.449217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.411 [2024-11-20 19:04:28.449255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.411 qpair failed and we were unable to recover it. 00:27:06.411 [2024-11-20 19:04:28.449536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.411 [2024-11-20 19:04:28.449571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.411 qpair failed and we were unable to recover it. 00:27:06.411 [2024-11-20 19:04:28.449841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.411 [2024-11-20 19:04:28.449875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.411 qpair failed and we were unable to recover it. 00:27:06.411 [2024-11-20 19:04:28.450136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.411 [2024-11-20 19:04:28.450170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.411 qpair failed and we were unable to recover it. 
00:27:06.411 [2024-11-20 19:04:28.450475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.411 [2024-11-20 19:04:28.450510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.411 qpair failed and we were unable to recover it. 00:27:06.411 [2024-11-20 19:04:28.450768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.411 [2024-11-20 19:04:28.450804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.411 qpair failed and we were unable to recover it. 00:27:06.411 [2024-11-20 19:04:28.450932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.411 [2024-11-20 19:04:28.450969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.411 qpair failed and we were unable to recover it. 00:27:06.411 [2024-11-20 19:04:28.451156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.411 [2024-11-20 19:04:28.451190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.411 qpair failed and we were unable to recover it. 00:27:06.411 [2024-11-20 19:04:28.451465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.411 [2024-11-20 19:04:28.451501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.411 qpair failed and we were unable to recover it. 
00:27:06.411 [2024-11-20 19:04:28.451770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.411 [2024-11-20 19:04:28.451805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.411 qpair failed and we were unable to recover it. 00:27:06.411 [2024-11-20 19:04:28.451990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.411 [2024-11-20 19:04:28.452024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.411 qpair failed and we were unable to recover it. 00:27:06.411 [2024-11-20 19:04:28.452227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.411 [2024-11-20 19:04:28.452263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.411 qpair failed and we were unable to recover it. 00:27:06.411 [2024-11-20 19:04:28.452546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.411 [2024-11-20 19:04:28.452582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.411 qpair failed and we were unable to recover it. 00:27:06.411 [2024-11-20 19:04:28.452858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.411 [2024-11-20 19:04:28.452893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.411 qpair failed and we were unable to recover it. 
00:27:06.411 [2024-11-20 19:04:28.453127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.411 [2024-11-20 19:04:28.453162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.411 qpair failed and we were unable to recover it. 00:27:06.411 [2024-11-20 19:04:28.453434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.411 [2024-11-20 19:04:28.453470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.411 qpair failed and we were unable to recover it. 00:27:06.411 [2024-11-20 19:04:28.453730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.411 [2024-11-20 19:04:28.453764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.411 qpair failed and we were unable to recover it. 00:27:06.411 [2024-11-20 19:04:28.454029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.411 [2024-11-20 19:04:28.454065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.411 qpair failed and we were unable to recover it. 00:27:06.411 [2024-11-20 19:04:28.454247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.411 [2024-11-20 19:04:28.454284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.411 qpair failed and we were unable to recover it. 
00:27:06.411 [2024-11-20 19:04:28.454484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.411 [2024-11-20 19:04:28.454519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.411 qpair failed and we were unable to recover it. 00:27:06.411 [2024-11-20 19:04:28.454729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.411 [2024-11-20 19:04:28.454764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.411 qpair failed and we were unable to recover it. 00:27:06.411 [2024-11-20 19:04:28.455006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.411 [2024-11-20 19:04:28.455041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.411 qpair failed and we were unable to recover it. 00:27:06.411 [2024-11-20 19:04:28.455172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.411 [2024-11-20 19:04:28.455215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.411 qpair failed and we were unable to recover it. 00:27:06.411 [2024-11-20 19:04:28.455496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.411 [2024-11-20 19:04:28.455531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.411 qpair failed and we were unable to recover it. 
00:27:06.411 [2024-11-20 19:04:28.455826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.411 [2024-11-20 19:04:28.455861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.411 qpair failed and we were unable to recover it. 00:27:06.411 [2024-11-20 19:04:28.456128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.411 [2024-11-20 19:04:28.456163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.411 qpair failed and we were unable to recover it. 00:27:06.411 [2024-11-20 19:04:28.456416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.411 [2024-11-20 19:04:28.456454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.411 qpair failed and we were unable to recover it. 00:27:06.411 [2024-11-20 19:04:28.456758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.411 [2024-11-20 19:04:28.456794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.411 qpair failed and we were unable to recover it. 00:27:06.411 [2024-11-20 19:04:28.457076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.411 [2024-11-20 19:04:28.457110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.411 qpair failed and we were unable to recover it. 
00:27:06.411 [2024-11-20 19:04:28.457390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.411 [2024-11-20 19:04:28.457428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.411 qpair failed and we were unable to recover it. 00:27:06.411 [2024-11-20 19:04:28.457706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.411 [2024-11-20 19:04:28.457739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.411 qpair failed and we were unable to recover it. 00:27:06.411 [2024-11-20 19:04:28.458030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.411 [2024-11-20 19:04:28.458066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.411 qpair failed and we were unable to recover it. 00:27:06.411 [2024-11-20 19:04:28.458258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.411 [2024-11-20 19:04:28.458294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.411 qpair failed and we were unable to recover it. 00:27:06.411 [2024-11-20 19:04:28.458481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.411 [2024-11-20 19:04:28.458516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.411 qpair failed and we were unable to recover it. 
00:27:06.411 [2024-11-20 19:04:28.458795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.411 [2024-11-20 19:04:28.458830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.411 qpair failed and we were unable to recover it. 00:27:06.411 [2024-11-20 19:04:28.459100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.412 [2024-11-20 19:04:28.459134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.412 qpair failed and we were unable to recover it. 00:27:06.412 [2024-11-20 19:04:28.459368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.412 [2024-11-20 19:04:28.459405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.412 qpair failed and we were unable to recover it. 00:27:06.412 [2024-11-20 19:04:28.459628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.412 [2024-11-20 19:04:28.459662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.412 qpair failed and we were unable to recover it. 00:27:06.412 [2024-11-20 19:04:28.459943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.412 [2024-11-20 19:04:28.459978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.412 qpair failed and we were unable to recover it. 
00:27:06.412 [2024-11-20 19:04:28.460263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.412 [2024-11-20 19:04:28.460306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.412 qpair failed and we were unable to recover it. 00:27:06.412 [2024-11-20 19:04:28.460580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.412 [2024-11-20 19:04:28.460615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.412 qpair failed and we were unable to recover it. 00:27:06.412 [2024-11-20 19:04:28.460835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.412 [2024-11-20 19:04:28.460870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.412 qpair failed and we were unable to recover it. 00:27:06.412 [2024-11-20 19:04:28.461143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.412 [2024-11-20 19:04:28.461178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.412 qpair failed and we were unable to recover it. 00:27:06.412 [2024-11-20 19:04:28.461485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.412 [2024-11-20 19:04:28.461521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.412 qpair failed and we were unable to recover it. 
00:27:06.412 [2024-11-20 19:04:28.461745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.412 [2024-11-20 19:04:28.461780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.412 qpair failed and we were unable to recover it. 00:27:06.412 [2024-11-20 19:04:28.462061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.412 [2024-11-20 19:04:28.462095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.412 qpair failed and we were unable to recover it. 00:27:06.412 [2024-11-20 19:04:28.462376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.412 [2024-11-20 19:04:28.462413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.412 qpair failed and we were unable to recover it. 00:27:06.412 [2024-11-20 19:04:28.462633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.412 [2024-11-20 19:04:28.462668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.412 qpair failed and we were unable to recover it. 00:27:06.412 [2024-11-20 19:04:28.462953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.412 [2024-11-20 19:04:28.462988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.412 qpair failed and we were unable to recover it. 
00:27:06.412 [2024-11-20 19:04:28.463130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.412 [2024-11-20 19:04:28.463165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.412 qpair failed and we were unable to recover it. 00:27:06.412 [2024-11-20 19:04:28.463381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.412 [2024-11-20 19:04:28.463417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.412 qpair failed and we were unable to recover it. 00:27:06.412 [2024-11-20 19:04:28.463720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.412 [2024-11-20 19:04:28.463755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.412 qpair failed and we were unable to recover it. 00:27:06.412 [2024-11-20 19:04:28.463998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.412 [2024-11-20 19:04:28.464035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.412 qpair failed and we were unable to recover it. 00:27:06.412 [2024-11-20 19:04:28.464160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.412 [2024-11-20 19:04:28.464195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.412 qpair failed and we were unable to recover it. 
00:27:06.412 [2024-11-20 19:04:28.464491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.412 [2024-11-20 19:04:28.464527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.412 qpair failed and we were unable to recover it. 00:27:06.412 [2024-11-20 19:04:28.464805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.412 [2024-11-20 19:04:28.464840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.412 qpair failed and we were unable to recover it. 00:27:06.412 [2024-11-20 19:04:28.465045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.412 [2024-11-20 19:04:28.465080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.412 qpair failed and we were unable to recover it. 00:27:06.412 [2024-11-20 19:04:28.465246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.412 [2024-11-20 19:04:28.465282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.412 qpair failed and we were unable to recover it. 00:27:06.412 [2024-11-20 19:04:28.465539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.412 [2024-11-20 19:04:28.465574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.412 qpair failed and we were unable to recover it. 
00:27:06.412 [2024-11-20 19:04:28.465819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.412 [2024-11-20 19:04:28.465854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.412 qpair failed and we were unable to recover it. 00:27:06.412 [2024-11-20 19:04:28.466047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.412 [2024-11-20 19:04:28.466081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.412 qpair failed and we were unable to recover it. 00:27:06.412 [2024-11-20 19:04:28.466392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.412 [2024-11-20 19:04:28.466429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.412 qpair failed and we were unable to recover it. 00:27:06.412 [2024-11-20 19:04:28.466635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.412 [2024-11-20 19:04:28.466669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.412 qpair failed and we were unable to recover it. 00:27:06.412 [2024-11-20 19:04:28.466946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.412 [2024-11-20 19:04:28.466981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.412 qpair failed and we were unable to recover it. 
00:27:06.412-00:27:06.416 [... the same three-line failure sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats for timestamps 2024-11-20 19:04:28.467169 through 19:04:28.497913 ...]
00:27:06.416 [2024-11-20 19:04:28.498110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.416 [2024-11-20 19:04:28.498145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.416 qpair failed and we were unable to recover it. 00:27:06.416 [2024-11-20 19:04:28.498378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.416 [2024-11-20 19:04:28.498414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.416 qpair failed and we were unable to recover it. 00:27:06.416 [2024-11-20 19:04:28.498619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.416 [2024-11-20 19:04:28.498654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.416 qpair failed and we were unable to recover it. 00:27:06.416 [2024-11-20 19:04:28.498888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.416 [2024-11-20 19:04:28.498923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.416 qpair failed and we were unable to recover it. 00:27:06.416 [2024-11-20 19:04:28.499122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.416 [2024-11-20 19:04:28.499156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.416 qpair failed and we were unable to recover it. 
00:27:06.416 [2024-11-20 19:04:28.499357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.416 [2024-11-20 19:04:28.499393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.416 qpair failed and we were unable to recover it. 00:27:06.416 [2024-11-20 19:04:28.499653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.416 [2024-11-20 19:04:28.499690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.416 qpair failed and we were unable to recover it. 00:27:06.416 [2024-11-20 19:04:28.499971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.416 [2024-11-20 19:04:28.500007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.416 qpair failed and we were unable to recover it. 00:27:06.416 [2024-11-20 19:04:28.500221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.416 [2024-11-20 19:04:28.500257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.416 qpair failed and we were unable to recover it. 00:27:06.416 [2024-11-20 19:04:28.500394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.416 [2024-11-20 19:04:28.500430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.416 qpair failed and we were unable to recover it. 
00:27:06.416 [2024-11-20 19:04:28.500715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.416 [2024-11-20 19:04:28.500750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.416 qpair failed and we were unable to recover it. 00:27:06.416 [2024-11-20 19:04:28.500979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.416 [2024-11-20 19:04:28.501016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.416 qpair failed and we were unable to recover it. 00:27:06.416 [2024-11-20 19:04:28.501161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.416 [2024-11-20 19:04:28.501196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.416 qpair failed and we were unable to recover it. 00:27:06.416 [2024-11-20 19:04:28.501425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.416 [2024-11-20 19:04:28.501459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.416 qpair failed and we were unable to recover it. 00:27:06.416 [2024-11-20 19:04:28.501598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.416 [2024-11-20 19:04:28.501636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.416 qpair failed and we were unable to recover it. 
00:27:06.416 [2024-11-20 19:04:28.501844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.416 [2024-11-20 19:04:28.501878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.416 qpair failed and we were unable to recover it. 00:27:06.416 [2024-11-20 19:04:28.502082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.416 [2024-11-20 19:04:28.502117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.416 qpair failed and we were unable to recover it. 00:27:06.416 [2024-11-20 19:04:28.502374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.416 [2024-11-20 19:04:28.502411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.416 qpair failed and we were unable to recover it. 00:27:06.416 [2024-11-20 19:04:28.502553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.416 [2024-11-20 19:04:28.502589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.416 qpair failed and we were unable to recover it. 00:27:06.416 [2024-11-20 19:04:28.502741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.416 [2024-11-20 19:04:28.502775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.416 qpair failed and we were unable to recover it. 
00:27:06.416 [2024-11-20 19:04:28.503056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.416 [2024-11-20 19:04:28.503096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.416 qpair failed and we were unable to recover it. 00:27:06.416 [2024-11-20 19:04:28.503403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.416 [2024-11-20 19:04:28.503440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.416 qpair failed and we were unable to recover it. 00:27:06.416 [2024-11-20 19:04:28.503590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.416 [2024-11-20 19:04:28.503624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.416 qpair failed and we were unable to recover it. 00:27:06.416 [2024-11-20 19:04:28.503881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.416 [2024-11-20 19:04:28.503921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.416 qpair failed and we were unable to recover it. 00:27:06.416 [2024-11-20 19:04:28.504117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.416 [2024-11-20 19:04:28.504155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.416 qpair failed and we were unable to recover it. 
00:27:06.416 [2024-11-20 19:04:28.504423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.417 [2024-11-20 19:04:28.504474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.417 qpair failed and we were unable to recover it. 00:27:06.417 [2024-11-20 19:04:28.504747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.417 [2024-11-20 19:04:28.504781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.417 qpair failed and we were unable to recover it. 00:27:06.417 [2024-11-20 19:04:28.505065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.417 [2024-11-20 19:04:28.505099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.417 qpair failed and we were unable to recover it. 00:27:06.417 [2024-11-20 19:04:28.505332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.417 [2024-11-20 19:04:28.505369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.417 qpair failed and we were unable to recover it. 00:27:06.417 [2024-11-20 19:04:28.505658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.417 [2024-11-20 19:04:28.505694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.417 qpair failed and we were unable to recover it. 
00:27:06.417 [2024-11-20 19:04:28.505964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.417 [2024-11-20 19:04:28.506000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.417 qpair failed and we were unable to recover it. 00:27:06.417 [2024-11-20 19:04:28.506142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.417 [2024-11-20 19:04:28.506176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.417 qpair failed and we were unable to recover it. 00:27:06.417 [2024-11-20 19:04:28.506397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.417 [2024-11-20 19:04:28.506432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.417 qpair failed and we were unable to recover it. 00:27:06.417 [2024-11-20 19:04:28.506734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.417 [2024-11-20 19:04:28.506770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.417 qpair failed and we were unable to recover it. 00:27:06.417 [2024-11-20 19:04:28.506912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.417 [2024-11-20 19:04:28.506947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.417 qpair failed and we were unable to recover it. 
00:27:06.417 [2024-11-20 19:04:28.507225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.417 [2024-11-20 19:04:28.507262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.417 qpair failed and we were unable to recover it. 00:27:06.417 [2024-11-20 19:04:28.507484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.417 [2024-11-20 19:04:28.507517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.417 qpair failed and we were unable to recover it. 00:27:06.417 [2024-11-20 19:04:28.507665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.417 [2024-11-20 19:04:28.507702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.417 qpair failed and we were unable to recover it. 00:27:06.417 [2024-11-20 19:04:28.507858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.417 [2024-11-20 19:04:28.507893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.417 qpair failed and we were unable to recover it. 00:27:06.417 [2024-11-20 19:04:28.508104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.417 [2024-11-20 19:04:28.508142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.417 qpair failed and we were unable to recover it. 
00:27:06.417 [2024-11-20 19:04:28.508364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.417 [2024-11-20 19:04:28.508402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.417 qpair failed and we were unable to recover it. 00:27:06.417 [2024-11-20 19:04:28.508549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.417 [2024-11-20 19:04:28.508585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.417 qpair failed and we were unable to recover it. 00:27:06.417 [2024-11-20 19:04:28.508776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.417 [2024-11-20 19:04:28.508811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.417 qpair failed and we were unable to recover it. 00:27:06.417 [2024-11-20 19:04:28.509090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.417 [2024-11-20 19:04:28.509124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.417 qpair failed and we were unable to recover it. 00:27:06.417 [2024-11-20 19:04:28.509349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.417 [2024-11-20 19:04:28.509385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.417 qpair failed and we were unable to recover it. 
00:27:06.417 [2024-11-20 19:04:28.509585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.417 [2024-11-20 19:04:28.509620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.417 qpair failed and we were unable to recover it. 00:27:06.417 [2024-11-20 19:04:28.509856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.417 [2024-11-20 19:04:28.509891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.417 qpair failed and we were unable to recover it. 00:27:06.417 [2024-11-20 19:04:28.510087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.417 [2024-11-20 19:04:28.510122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.417 qpair failed and we were unable to recover it. 00:27:06.417 [2024-11-20 19:04:28.510384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.417 [2024-11-20 19:04:28.510421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.417 qpair failed and we were unable to recover it. 00:27:06.417 [2024-11-20 19:04:28.510623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.417 [2024-11-20 19:04:28.510657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.417 qpair failed and we were unable to recover it. 
00:27:06.417 [2024-11-20 19:04:28.510845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.417 [2024-11-20 19:04:28.510879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.417 qpair failed and we were unable to recover it. 00:27:06.417 [2024-11-20 19:04:28.511022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.417 [2024-11-20 19:04:28.511059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.417 qpair failed and we were unable to recover it. 00:27:06.417 [2024-11-20 19:04:28.511272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.417 [2024-11-20 19:04:28.511309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.417 qpair failed and we were unable to recover it. 00:27:06.417 [2024-11-20 19:04:28.511518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.417 [2024-11-20 19:04:28.511555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.417 qpair failed and we were unable to recover it. 00:27:06.417 [2024-11-20 19:04:28.511693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.417 [2024-11-20 19:04:28.511729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.417 qpair failed and we were unable to recover it. 
00:27:06.417 [2024-11-20 19:04:28.512037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.417 [2024-11-20 19:04:28.512073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.417 qpair failed and we were unable to recover it. 00:27:06.417 [2024-11-20 19:04:28.512264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.417 [2024-11-20 19:04:28.512300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.417 qpair failed and we were unable to recover it. 00:27:06.417 [2024-11-20 19:04:28.512487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.417 [2024-11-20 19:04:28.512522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.417 qpair failed and we were unable to recover it. 00:27:06.417 [2024-11-20 19:04:28.512804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.417 [2024-11-20 19:04:28.512839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.417 qpair failed and we were unable to recover it. 00:27:06.417 [2024-11-20 19:04:28.512955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.417 [2024-11-20 19:04:28.512989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.417 qpair failed and we were unable to recover it. 
00:27:06.417 [2024-11-20 19:04:28.513200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.418 [2024-11-20 19:04:28.513248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.418 qpair failed and we were unable to recover it. 00:27:06.418 [2024-11-20 19:04:28.513458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.418 [2024-11-20 19:04:28.513492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.418 qpair failed and we were unable to recover it. 00:27:06.418 [2024-11-20 19:04:28.513710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.418 [2024-11-20 19:04:28.513746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.418 qpair failed and we were unable to recover it. 00:27:06.418 [2024-11-20 19:04:28.514024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.418 [2024-11-20 19:04:28.514061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.418 qpair failed and we were unable to recover it. 00:27:06.418 [2024-11-20 19:04:28.514296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.418 [2024-11-20 19:04:28.514333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.418 qpair failed and we were unable to recover it. 
00:27:06.418 [2024-11-20 19:04:28.514498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.418 [2024-11-20 19:04:28.514539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.418 qpair failed and we were unable to recover it. 00:27:06.418 [2024-11-20 19:04:28.514727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.418 [2024-11-20 19:04:28.514765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.418 qpair failed and we were unable to recover it. 00:27:06.418 [2024-11-20 19:04:28.514984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.418 [2024-11-20 19:04:28.515020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.418 qpair failed and we were unable to recover it. 00:27:06.418 [2024-11-20 19:04:28.515224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.418 [2024-11-20 19:04:28.515263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.418 qpair failed and we were unable to recover it. 00:27:06.418 [2024-11-20 19:04:28.515474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.418 [2024-11-20 19:04:28.515510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.418 qpair failed and we were unable to recover it. 
00:27:06.418 [2024-11-20 19:04:28.515715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.418 [2024-11-20 19:04:28.515749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.418 qpair failed and we were unable to recover it. 00:27:06.418 [2024-11-20 19:04:28.515901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.418 [2024-11-20 19:04:28.515936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.418 qpair failed and we were unable to recover it. 00:27:06.418 [2024-11-20 19:04:28.516128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.418 [2024-11-20 19:04:28.516162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.418 qpair failed and we were unable to recover it. 00:27:06.418 [2024-11-20 19:04:28.516309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.418 [2024-11-20 19:04:28.516346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.418 qpair failed and we were unable to recover it. 00:27:06.418 [2024-11-20 19:04:28.516508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.418 [2024-11-20 19:04:28.516544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.418 qpair failed and we were unable to recover it. 
00:27:06.418 [2024-11-20 19:04:28.516757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.418 [2024-11-20 19:04:28.516794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.418 qpair failed and we were unable to recover it. 00:27:06.418 [2024-11-20 19:04:28.516996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.418 [2024-11-20 19:04:28.517030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.418 qpair failed and we were unable to recover it. 00:27:06.418 [2024-11-20 19:04:28.517166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.418 [2024-11-20 19:04:28.517213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.418 qpair failed and we were unable to recover it. 00:27:06.418 [2024-11-20 19:04:28.517415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.418 [2024-11-20 19:04:28.517449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.418 qpair failed and we were unable to recover it. 00:27:06.418 [2024-11-20 19:04:28.517673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.418 [2024-11-20 19:04:28.517710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.418 qpair failed and we were unable to recover it. 
00:27:06.422 [2024-11-20 19:04:28.547192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.422 [2024-11-20 19:04:28.547237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.422 qpair failed and we were unable to recover it. 00:27:06.422 [2024-11-20 19:04:28.547370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.422 [2024-11-20 19:04:28.547407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.422 qpair failed and we were unable to recover it. 00:27:06.422 [2024-11-20 19:04:28.547684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.422 [2024-11-20 19:04:28.547719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.422 qpair failed and we were unable to recover it. 00:27:06.422 [2024-11-20 19:04:28.547987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.422 [2024-11-20 19:04:28.548022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.422 qpair failed and we were unable to recover it. 00:27:06.422 [2024-11-20 19:04:28.548285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.422 [2024-11-20 19:04:28.548321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.422 qpair failed and we were unable to recover it. 
00:27:06.422 [2024-11-20 19:04:28.548594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.422 [2024-11-20 19:04:28.548628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.422 qpair failed and we were unable to recover it. 00:27:06.422 [2024-11-20 19:04:28.548897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.422 [2024-11-20 19:04:28.548933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.422 qpair failed and we were unable to recover it. 00:27:06.422 [2024-11-20 19:04:28.549143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.422 [2024-11-20 19:04:28.549177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.422 qpair failed and we were unable to recover it. 00:27:06.422 [2024-11-20 19:04:28.549345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.422 [2024-11-20 19:04:28.549381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.422 qpair failed and we were unable to recover it. 00:27:06.422 [2024-11-20 19:04:28.549531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.422 [2024-11-20 19:04:28.549566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.422 qpair failed and we were unable to recover it. 
00:27:06.422 [2024-11-20 19:04:28.549786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.422 [2024-11-20 19:04:28.549820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.422 qpair failed and we were unable to recover it. 00:27:06.422 [2024-11-20 19:04:28.550088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.422 [2024-11-20 19:04:28.550123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.422 qpair failed and we were unable to recover it. 00:27:06.422 [2024-11-20 19:04:28.550331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.422 [2024-11-20 19:04:28.550367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.422 qpair failed and we were unable to recover it. 00:27:06.422 [2024-11-20 19:04:28.550627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.422 [2024-11-20 19:04:28.550662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.422 qpair failed and we were unable to recover it. 00:27:06.422 [2024-11-20 19:04:28.550915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.422 [2024-11-20 19:04:28.550950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.422 qpair failed and we were unable to recover it. 
00:27:06.422 [2024-11-20 19:04:28.551171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.422 [2024-11-20 19:04:28.551218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.422 qpair failed and we were unable to recover it. 00:27:06.422 [2024-11-20 19:04:28.551345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.422 [2024-11-20 19:04:28.551379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.422 qpair failed and we were unable to recover it. 00:27:06.422 [2024-11-20 19:04:28.551625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.422 [2024-11-20 19:04:28.551660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.422 qpair failed and we were unable to recover it. 00:27:06.422 [2024-11-20 19:04:28.551885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.422 [2024-11-20 19:04:28.551920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.422 qpair failed and we were unable to recover it. 00:27:06.422 [2024-11-20 19:04:28.552156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.422 [2024-11-20 19:04:28.552190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.422 qpair failed and we were unable to recover it. 
00:27:06.422 [2024-11-20 19:04:28.552354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.422 [2024-11-20 19:04:28.552388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.422 qpair failed and we were unable to recover it. 00:27:06.422 [2024-11-20 19:04:28.552665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.422 [2024-11-20 19:04:28.552700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.422 qpair failed and we were unable to recover it. 00:27:06.422 [2024-11-20 19:04:28.552983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.422 [2024-11-20 19:04:28.553018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.422 qpair failed and we were unable to recover it. 00:27:06.422 [2024-11-20 19:04:28.553220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.422 [2024-11-20 19:04:28.553257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.422 qpair failed and we were unable to recover it. 00:27:06.422 [2024-11-20 19:04:28.553377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.422 [2024-11-20 19:04:28.553413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.422 qpair failed and we were unable to recover it. 
00:27:06.422 [2024-11-20 19:04:28.553626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.422 [2024-11-20 19:04:28.553661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.422 qpair failed and we were unable to recover it. 00:27:06.422 [2024-11-20 19:04:28.553791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.422 [2024-11-20 19:04:28.553824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.422 qpair failed and we were unable to recover it. 00:27:06.422 [2024-11-20 19:04:28.554116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.422 [2024-11-20 19:04:28.554150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.422 qpair failed and we were unable to recover it. 00:27:06.422 [2024-11-20 19:04:28.554381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.422 [2024-11-20 19:04:28.554417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.422 qpair failed and we were unable to recover it. 00:27:06.422 [2024-11-20 19:04:28.554564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.422 [2024-11-20 19:04:28.554597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.422 qpair failed and we were unable to recover it. 
00:27:06.422 [2024-11-20 19:04:28.554883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.422 [2024-11-20 19:04:28.554919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.423 qpair failed and we were unable to recover it. 00:27:06.423 [2024-11-20 19:04:28.555079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.423 [2024-11-20 19:04:28.555115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.423 qpair failed and we were unable to recover it. 00:27:06.423 [2024-11-20 19:04:28.555315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.423 [2024-11-20 19:04:28.555351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.423 qpair failed and we were unable to recover it. 00:27:06.423 [2024-11-20 19:04:28.555554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.423 [2024-11-20 19:04:28.555589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.423 qpair failed and we were unable to recover it. 00:27:06.423 [2024-11-20 19:04:28.555800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.423 [2024-11-20 19:04:28.555835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.423 qpair failed and we were unable to recover it. 
00:27:06.423 [2024-11-20 19:04:28.555977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.423 [2024-11-20 19:04:28.556022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.423 qpair failed and we were unable to recover it. 00:27:06.423 [2024-11-20 19:04:28.556146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.423 [2024-11-20 19:04:28.556181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.423 qpair failed and we were unable to recover it. 00:27:06.423 [2024-11-20 19:04:28.556355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.423 [2024-11-20 19:04:28.556391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.423 qpair failed and we were unable to recover it. 00:27:06.423 [2024-11-20 19:04:28.556592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.423 [2024-11-20 19:04:28.556627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.423 qpair failed and we were unable to recover it. 00:27:06.423 [2024-11-20 19:04:28.556952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.423 [2024-11-20 19:04:28.556987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.423 qpair failed and we were unable to recover it. 
00:27:06.423 [2024-11-20 19:04:28.557186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.423 [2024-11-20 19:04:28.557230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.423 qpair failed and we were unable to recover it. 00:27:06.423 [2024-11-20 19:04:28.557393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.423 [2024-11-20 19:04:28.557428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.423 qpair failed and we were unable to recover it. 00:27:06.423 [2024-11-20 19:04:28.557669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.423 [2024-11-20 19:04:28.557703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.423 qpair failed and we were unable to recover it. 00:27:06.423 [2024-11-20 19:04:28.558015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.423 [2024-11-20 19:04:28.558050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.423 qpair failed and we were unable to recover it. 00:27:06.423 [2024-11-20 19:04:28.558333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.423 [2024-11-20 19:04:28.558371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.423 qpair failed and we were unable to recover it. 
00:27:06.423 [2024-11-20 19:04:28.558652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.423 [2024-11-20 19:04:28.558688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.423 qpair failed and we were unable to recover it. 00:27:06.423 [2024-11-20 19:04:28.558972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.423 [2024-11-20 19:04:28.559007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.423 qpair failed and we were unable to recover it. 00:27:06.423 [2024-11-20 19:04:28.559229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.423 [2024-11-20 19:04:28.559265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.423 qpair failed and we were unable to recover it. 00:27:06.423 [2024-11-20 19:04:28.559535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.423 [2024-11-20 19:04:28.559569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.423 qpair failed and we were unable to recover it. 00:27:06.423 [2024-11-20 19:04:28.559836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.423 [2024-11-20 19:04:28.559873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.423 qpair failed and we were unable to recover it. 
00:27:06.423 [2024-11-20 19:04:28.560081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.423 [2024-11-20 19:04:28.560117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.423 qpair failed and we were unable to recover it. 00:27:06.423 [2024-11-20 19:04:28.560318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.423 [2024-11-20 19:04:28.560354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.423 qpair failed and we were unable to recover it. 00:27:06.423 [2024-11-20 19:04:28.560566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.423 [2024-11-20 19:04:28.560602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.423 qpair failed and we were unable to recover it. 00:27:06.423 [2024-11-20 19:04:28.560743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.423 [2024-11-20 19:04:28.560778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.423 qpair failed and we were unable to recover it. 00:27:06.423 [2024-11-20 19:04:28.560989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.423 [2024-11-20 19:04:28.561023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.423 qpair failed and we were unable to recover it. 
00:27:06.423 [2024-11-20 19:04:28.561163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.423 [2024-11-20 19:04:28.561198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.423 qpair failed and we were unable to recover it. 00:27:06.423 [2024-11-20 19:04:28.561487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.423 [2024-11-20 19:04:28.561522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.423 qpair failed and we were unable to recover it. 00:27:06.423 [2024-11-20 19:04:28.561751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.423 [2024-11-20 19:04:28.561786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.423 qpair failed and we were unable to recover it. 00:27:06.423 [2024-11-20 19:04:28.562042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.423 [2024-11-20 19:04:28.562077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.423 qpair failed and we were unable to recover it. 00:27:06.423 [2024-11-20 19:04:28.562347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.423 [2024-11-20 19:04:28.562383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.423 qpair failed and we were unable to recover it. 
00:27:06.423 [2024-11-20 19:04:28.562591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.423 [2024-11-20 19:04:28.562625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.423 qpair failed and we were unable to recover it. 00:27:06.423 [2024-11-20 19:04:28.562956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.423 [2024-11-20 19:04:28.562992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.423 qpair failed and we were unable to recover it. 00:27:06.423 [2024-11-20 19:04:28.563279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.423 [2024-11-20 19:04:28.563317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.424 qpair failed and we were unable to recover it. 00:27:06.424 [2024-11-20 19:04:28.563600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.424 [2024-11-20 19:04:28.563634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.424 qpair failed and we were unable to recover it. 00:27:06.424 [2024-11-20 19:04:28.563783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.424 [2024-11-20 19:04:28.563818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.424 qpair failed and we were unable to recover it. 
00:27:06.424 [2024-11-20 19:04:28.564037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.424 [2024-11-20 19:04:28.564072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.424 qpair failed and we were unable to recover it. 00:27:06.424 [2024-11-20 19:04:28.564359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.424 [2024-11-20 19:04:28.564395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.424 qpair failed and we were unable to recover it. 00:27:06.424 [2024-11-20 19:04:28.564624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.424 [2024-11-20 19:04:28.564660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.424 qpair failed and we were unable to recover it. 00:27:06.424 [2024-11-20 19:04:28.564905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.424 [2024-11-20 19:04:28.564940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.424 qpair failed and we were unable to recover it. 00:27:06.424 [2024-11-20 19:04:28.565132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.424 [2024-11-20 19:04:28.565165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.424 qpair failed and we were unable to recover it. 
00:27:06.424 [2024-11-20 19:04:28.565322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.424 [2024-11-20 19:04:28.565379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.424 qpair failed and we were unable to recover it. 00:27:06.424 [2024-11-20 19:04:28.565590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.424 [2024-11-20 19:04:28.565625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.424 qpair failed and we were unable to recover it. 00:27:06.424 [2024-11-20 19:04:28.565836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.424 [2024-11-20 19:04:28.565870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.424 qpair failed and we were unable to recover it. 00:27:06.424 [2024-11-20 19:04:28.566128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.424 [2024-11-20 19:04:28.566163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.424 qpair failed and we were unable to recover it. 00:27:06.424 [2024-11-20 19:04:28.566380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.424 [2024-11-20 19:04:28.566416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.424 qpair failed and we were unable to recover it. 
00:27:06.424 [2024-11-20 19:04:28.566672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.424 [2024-11-20 19:04:28.566714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:06.424 qpair failed and we were unable to recover it.
00:27:06.427 [2024-11-20 19:04:28.595970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.427 [2024-11-20 19:04:28.596005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.427 qpair failed and we were unable to recover it. 00:27:06.427 [2024-11-20 19:04:28.596308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.427 [2024-11-20 19:04:28.596344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.427 qpair failed and we were unable to recover it. 00:27:06.427 [2024-11-20 19:04:28.596496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.427 [2024-11-20 19:04:28.596530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.427 qpair failed and we were unable to recover it. 00:27:06.427 [2024-11-20 19:04:28.596719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.427 [2024-11-20 19:04:28.596755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.427 qpair failed and we were unable to recover it. 00:27:06.427 [2024-11-20 19:04:28.596986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.428 [2024-11-20 19:04:28.597020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.428 qpair failed and we were unable to recover it. 
00:27:06.428 [2024-11-20 19:04:28.597280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.428 [2024-11-20 19:04:28.597317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.428 qpair failed and we were unable to recover it. 00:27:06.428 [2024-11-20 19:04:28.597469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.428 [2024-11-20 19:04:28.597504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.428 qpair failed and we were unable to recover it. 00:27:06.428 [2024-11-20 19:04:28.597718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.428 [2024-11-20 19:04:28.597753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.428 qpair failed and we were unable to recover it. 00:27:06.428 [2024-11-20 19:04:28.598062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.428 [2024-11-20 19:04:28.598097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.428 qpair failed and we were unable to recover it. 00:27:06.428 [2024-11-20 19:04:28.598395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.428 [2024-11-20 19:04:28.598433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.428 qpair failed and we were unable to recover it. 
00:27:06.428 [2024-11-20 19:04:28.598696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.428 [2024-11-20 19:04:28.598731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.428 qpair failed and we were unable to recover it. 00:27:06.428 [2024-11-20 19:04:28.598996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.428 [2024-11-20 19:04:28.599031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.428 qpair failed and we were unable to recover it. 00:27:06.428 [2024-11-20 19:04:28.599298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.428 [2024-11-20 19:04:28.599335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.428 qpair failed and we were unable to recover it. 00:27:06.428 [2024-11-20 19:04:28.599485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.428 [2024-11-20 19:04:28.599519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.428 qpair failed and we were unable to recover it. 00:27:06.428 [2024-11-20 19:04:28.599674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.428 [2024-11-20 19:04:28.599708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.428 qpair failed and we were unable to recover it. 
00:27:06.428 [2024-11-20 19:04:28.599986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.428 [2024-11-20 19:04:28.600021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.428 qpair failed and we were unable to recover it. 00:27:06.428 [2024-11-20 19:04:28.600228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.428 [2024-11-20 19:04:28.600265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.428 qpair failed and we were unable to recover it. 00:27:06.428 [2024-11-20 19:04:28.600473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.428 [2024-11-20 19:04:28.600508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.428 qpair failed and we were unable to recover it. 00:27:06.428 [2024-11-20 19:04:28.600718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.428 [2024-11-20 19:04:28.600752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.428 qpair failed and we were unable to recover it. 00:27:06.428 [2024-11-20 19:04:28.601068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.428 [2024-11-20 19:04:28.601104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.428 qpair failed and we were unable to recover it. 
00:27:06.428 [2024-11-20 19:04:28.601354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.428 [2024-11-20 19:04:28.601391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.428 qpair failed and we were unable to recover it. 00:27:06.428 [2024-11-20 19:04:28.601593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.428 [2024-11-20 19:04:28.601628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.428 qpair failed and we were unable to recover it. 00:27:06.428 [2024-11-20 19:04:28.601842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.428 [2024-11-20 19:04:28.601877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.428 qpair failed and we were unable to recover it. 00:27:06.428 [2024-11-20 19:04:28.602031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.428 [2024-11-20 19:04:28.602068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.428 qpair failed and we were unable to recover it. 00:27:06.428 [2024-11-20 19:04:28.602216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.428 [2024-11-20 19:04:28.602253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.428 qpair failed and we were unable to recover it. 
00:27:06.428 [2024-11-20 19:04:28.602513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.428 [2024-11-20 19:04:28.602548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.428 qpair failed and we were unable to recover it. 00:27:06.428 [2024-11-20 19:04:28.602754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.428 [2024-11-20 19:04:28.602788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.428 qpair failed and we were unable to recover it. 00:27:06.428 [2024-11-20 19:04:28.603070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.428 [2024-11-20 19:04:28.603106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.428 qpair failed and we were unable to recover it. 00:27:06.428 [2024-11-20 19:04:28.603389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.428 [2024-11-20 19:04:28.603426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.428 qpair failed and we were unable to recover it. 00:27:06.428 [2024-11-20 19:04:28.603638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.428 [2024-11-20 19:04:28.603674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.428 qpair failed and we were unable to recover it. 
00:27:06.428 [2024-11-20 19:04:28.603814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.428 [2024-11-20 19:04:28.603849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.428 qpair failed and we were unable to recover it. 00:27:06.428 [2024-11-20 19:04:28.604132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.428 [2024-11-20 19:04:28.604167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.428 qpair failed and we were unable to recover it. 00:27:06.428 [2024-11-20 19:04:28.604441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.428 [2024-11-20 19:04:28.604482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.428 qpair failed and we were unable to recover it. 00:27:06.428 [2024-11-20 19:04:28.604645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.428 [2024-11-20 19:04:28.604679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.428 qpair failed and we were unable to recover it. 00:27:06.428 [2024-11-20 19:04:28.604834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.428 [2024-11-20 19:04:28.604868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.428 qpair failed and we were unable to recover it. 
00:27:06.428 [2024-11-20 19:04:28.605056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.428 [2024-11-20 19:04:28.605091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.428 qpair failed and we were unable to recover it. 00:27:06.428 [2024-11-20 19:04:28.605284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.428 [2024-11-20 19:04:28.605320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.428 qpair failed and we were unable to recover it. 00:27:06.428 [2024-11-20 19:04:28.605512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.428 [2024-11-20 19:04:28.605547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.428 qpair failed and we were unable to recover it. 00:27:06.429 [2024-11-20 19:04:28.605818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.605852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 00:27:06.429 [2024-11-20 19:04:28.605994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.606028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 
00:27:06.429 [2024-11-20 19:04:28.606373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.606410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 00:27:06.429 [2024-11-20 19:04:28.606620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.606654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 00:27:06.429 [2024-11-20 19:04:28.606870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.606906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 00:27:06.429 [2024-11-20 19:04:28.607116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.607151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 00:27:06.429 [2024-11-20 19:04:28.607386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.607422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 
00:27:06.429 [2024-11-20 19:04:28.607578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.607613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 00:27:06.429 [2024-11-20 19:04:28.607825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.607858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 00:27:06.429 [2024-11-20 19:04:28.608135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.608168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 00:27:06.429 [2024-11-20 19:04:28.608458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.608494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 00:27:06.429 [2024-11-20 19:04:28.608702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.608738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 
00:27:06.429 [2024-11-20 19:04:28.609034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.609069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 00:27:06.429 [2024-11-20 19:04:28.609294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.609332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 00:27:06.429 [2024-11-20 19:04:28.609593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.609628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 00:27:06.429 [2024-11-20 19:04:28.609873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.609908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 00:27:06.429 [2024-11-20 19:04:28.610218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.610255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 
00:27:06.429 [2024-11-20 19:04:28.610463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.610497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 00:27:06.429 [2024-11-20 19:04:28.610709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.610745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 00:27:06.429 [2024-11-20 19:04:28.610879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.610914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 00:27:06.429 [2024-11-20 19:04:28.611111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.611146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 00:27:06.429 [2024-11-20 19:04:28.611370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.611406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 
00:27:06.429 [2024-11-20 19:04:28.611628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.611663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 00:27:06.429 [2024-11-20 19:04:28.611987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.612021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 00:27:06.429 [2024-11-20 19:04:28.612331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.612367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 00:27:06.429 [2024-11-20 19:04:28.612513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.612549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 00:27:06.429 [2024-11-20 19:04:28.612689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.612723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 
00:27:06.429 [2024-11-20 19:04:28.613026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.613061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 00:27:06.429 [2024-11-20 19:04:28.613345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.613382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 00:27:06.429 [2024-11-20 19:04:28.613614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.613650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 00:27:06.429 [2024-11-20 19:04:28.613911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.613946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 00:27:06.429 [2024-11-20 19:04:28.614141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.614175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 
00:27:06.429 [2024-11-20 19:04:28.614453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.614488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 00:27:06.429 [2024-11-20 19:04:28.614695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.614730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 00:27:06.429 [2024-11-20 19:04:28.615020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.615061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 00:27:06.429 [2024-11-20 19:04:28.615328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.615365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 00:27:06.429 [2024-11-20 19:04:28.615518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.429 [2024-11-20 19:04:28.615553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.429 qpair failed and we were unable to recover it. 
00:27:06.429 [2024-11-20 19:04:28.615695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.430 [2024-11-20 19:04:28.615728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:06.430 qpair failed and we were unable to recover it.
00:27:06.433 [... the same connect() failed / sock connection error / "qpair failed and we were unable to recover it" triplet repeats continuously for tqpair=0x7f7418000b90 (addr=10.0.0.2, port=4420) from 19:04:28.615 through 19:04:28.643; repeats elided ...]
00:27:06.433 [2024-11-20 19:04:28.644085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.433 [2024-11-20 19:04:28.644119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.433 qpair failed and we were unable to recover it. 00:27:06.433 [2024-11-20 19:04:28.644363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.433 [2024-11-20 19:04:28.644401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.433 qpair failed and we were unable to recover it. 00:27:06.433 [2024-11-20 19:04:28.644658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.433 [2024-11-20 19:04:28.644699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.433 qpair failed and we were unable to recover it. 00:27:06.433 [2024-11-20 19:04:28.644852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.433 [2024-11-20 19:04:28.644888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.433 qpair failed and we were unable to recover it. 00:27:06.433 [2024-11-20 19:04:28.645147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.433 [2024-11-20 19:04:28.645183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.433 qpair failed and we were unable to recover it. 
00:27:06.433 [2024-11-20 19:04:28.645478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.433 [2024-11-20 19:04:28.645514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.433 qpair failed and we were unable to recover it. 00:27:06.433 [2024-11-20 19:04:28.645649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.433 [2024-11-20 19:04:28.645686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.433 qpair failed and we were unable to recover it. 00:27:06.433 [2024-11-20 19:04:28.645922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.433 [2024-11-20 19:04:28.645957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.433 qpair failed and we were unable to recover it. 00:27:06.433 [2024-11-20 19:04:28.646177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.433 [2024-11-20 19:04:28.646233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.433 qpair failed and we were unable to recover it. 00:27:06.433 [2024-11-20 19:04:28.646386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.433 [2024-11-20 19:04:28.646421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.433 qpair failed and we were unable to recover it. 
00:27:06.433 [2024-11-20 19:04:28.646612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.433 [2024-11-20 19:04:28.646648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.433 qpair failed and we were unable to recover it. 00:27:06.433 [2024-11-20 19:04:28.646787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.433 [2024-11-20 19:04:28.646823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.433 qpair failed and we were unable to recover it. 00:27:06.433 [2024-11-20 19:04:28.647149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.433 [2024-11-20 19:04:28.647186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.433 qpair failed and we were unable to recover it. 00:27:06.433 [2024-11-20 19:04:28.647397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.433 [2024-11-20 19:04:28.647432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.433 qpair failed and we were unable to recover it. 00:27:06.433 [2024-11-20 19:04:28.647664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.433 [2024-11-20 19:04:28.647699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.433 qpair failed and we were unable to recover it. 
00:27:06.433 [2024-11-20 19:04:28.647915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.433 [2024-11-20 19:04:28.647951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.433 qpair failed and we were unable to recover it. 00:27:06.433 [2024-11-20 19:04:28.648235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.433 [2024-11-20 19:04:28.648272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.433 qpair failed and we were unable to recover it. 00:27:06.433 [2024-11-20 19:04:28.648484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.433 [2024-11-20 19:04:28.648519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.433 qpair failed and we were unable to recover it. 00:27:06.433 [2024-11-20 19:04:28.648806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.433 [2024-11-20 19:04:28.648841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.433 qpair failed and we were unable to recover it. 00:27:06.433 [2024-11-20 19:04:28.649029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.433 [2024-11-20 19:04:28.649066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.433 qpair failed and we were unable to recover it. 
00:27:06.433 [2024-11-20 19:04:28.649264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.433 [2024-11-20 19:04:28.649301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.433 qpair failed and we were unable to recover it. 00:27:06.433 [2024-11-20 19:04:28.649508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.433 [2024-11-20 19:04:28.649545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.433 qpair failed and we were unable to recover it. 00:27:06.433 [2024-11-20 19:04:28.649684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.433 [2024-11-20 19:04:28.649719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.433 qpair failed and we were unable to recover it. 00:27:06.433 [2024-11-20 19:04:28.649930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.433 [2024-11-20 19:04:28.649965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.433 qpair failed and we were unable to recover it. 00:27:06.433 [2024-11-20 19:04:28.650245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.433 [2024-11-20 19:04:28.650281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.433 qpair failed and we were unable to recover it. 
00:27:06.433 [2024-11-20 19:04:28.650544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.433 [2024-11-20 19:04:28.650579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.433 qpair failed and we were unable to recover it. 00:27:06.433 [2024-11-20 19:04:28.650767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.433 [2024-11-20 19:04:28.650801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.433 qpair failed and we were unable to recover it. 00:27:06.433 [2024-11-20 19:04:28.650990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.433 [2024-11-20 19:04:28.651024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.433 qpair failed and we were unable to recover it. 00:27:06.433 [2024-11-20 19:04:28.651167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.433 [2024-11-20 19:04:28.651214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.433 qpair failed and we were unable to recover it. 00:27:06.433 [2024-11-20 19:04:28.651417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.433 [2024-11-20 19:04:28.651454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.433 qpair failed and we were unable to recover it. 
00:27:06.433 [2024-11-20 19:04:28.651656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.433 [2024-11-20 19:04:28.651690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.433 qpair failed and we were unable to recover it. 00:27:06.433 [2024-11-20 19:04:28.651827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.651862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 00:27:06.434 [2024-11-20 19:04:28.652062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.652097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 00:27:06.434 [2024-11-20 19:04:28.652250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.652287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 00:27:06.434 [2024-11-20 19:04:28.652428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.652464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 
00:27:06.434 [2024-11-20 19:04:28.652663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.652697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 00:27:06.434 [2024-11-20 19:04:28.652839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.652876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 00:27:06.434 [2024-11-20 19:04:28.653017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.653051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 00:27:06.434 [2024-11-20 19:04:28.653186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.653232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 00:27:06.434 [2024-11-20 19:04:28.653374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.653409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 
00:27:06.434 [2024-11-20 19:04:28.653667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.653703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 00:27:06.434 [2024-11-20 19:04:28.653833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.653869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 00:27:06.434 [2024-11-20 19:04:28.653983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.654025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 00:27:06.434 [2024-11-20 19:04:28.654235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.654274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 00:27:06.434 [2024-11-20 19:04:28.654476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.654512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 
00:27:06.434 [2024-11-20 19:04:28.654660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.654695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 00:27:06.434 [2024-11-20 19:04:28.654819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.654855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 00:27:06.434 [2024-11-20 19:04:28.654992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.655027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 00:27:06.434 [2024-11-20 19:04:28.655236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.655274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 00:27:06.434 [2024-11-20 19:04:28.655489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.655526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 
00:27:06.434 [2024-11-20 19:04:28.655712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.655747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 00:27:06.434 [2024-11-20 19:04:28.655950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.655986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 00:27:06.434 [2024-11-20 19:04:28.656142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.656179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 00:27:06.434 [2024-11-20 19:04:28.656458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.656494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 00:27:06.434 [2024-11-20 19:04:28.656618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.656654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 
00:27:06.434 [2024-11-20 19:04:28.656794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.656829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 00:27:06.434 [2024-11-20 19:04:28.656964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.657002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 00:27:06.434 [2024-11-20 19:04:28.657226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.657265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 00:27:06.434 [2024-11-20 19:04:28.657464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.657498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 00:27:06.434 [2024-11-20 19:04:28.657640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.657675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 
00:27:06.434 [2024-11-20 19:04:28.657805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.657841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 00:27:06.434 [2024-11-20 19:04:28.657982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.658018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 00:27:06.434 [2024-11-20 19:04:28.658233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.658269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 00:27:06.434 [2024-11-20 19:04:28.658462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.658498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 00:27:06.434 [2024-11-20 19:04:28.658691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.658726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 
00:27:06.434 [2024-11-20 19:04:28.658850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.658885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 00:27:06.434 [2024-11-20 19:04:28.659021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.659056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 00:27:06.434 [2024-11-20 19:04:28.659188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.659238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 00:27:06.434 [2024-11-20 19:04:28.659437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.434 [2024-11-20 19:04:28.659472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.434 qpair failed and we were unable to recover it. 00:27:06.434 [2024-11-20 19:04:28.659611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.435 [2024-11-20 19:04:28.659646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.435 qpair failed and we were unable to recover it. 
00:27:06.435 [2024-11-20 19:04:28.659829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.435 [2024-11-20 19:04:28.659864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.435 qpair failed and we were unable to recover it. 00:27:06.435 [2024-11-20 19:04:28.660060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.435 [2024-11-20 19:04:28.660096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.435 qpair failed and we were unable to recover it. 00:27:06.435 [2024-11-20 19:04:28.660243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.435 [2024-11-20 19:04:28.660279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.435 qpair failed and we were unable to recover it. 00:27:06.435 [2024-11-20 19:04:28.660412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.435 [2024-11-20 19:04:28.660448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.435 qpair failed and we were unable to recover it. 00:27:06.435 [2024-11-20 19:04:28.660709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.435 [2024-11-20 19:04:28.660744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.435 qpair failed and we were unable to recover it. 
00:27:06.435 [2024-11-20 19:04:28.660983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.435 [2024-11-20 19:04:28.661019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.435 qpair failed and we were unable to recover it. 00:27:06.435 [2024-11-20 19:04:28.661139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.435 [2024-11-20 19:04:28.661175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.435 qpair failed and we were unable to recover it. 00:27:06.435 [2024-11-20 19:04:28.661321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.435 [2024-11-20 19:04:28.661359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.435 qpair failed and we were unable to recover it. 00:27:06.435 [2024-11-20 19:04:28.661492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.435 [2024-11-20 19:04:28.661527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.435 qpair failed and we were unable to recover it. 00:27:06.435 [2024-11-20 19:04:28.661641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.435 [2024-11-20 19:04:28.661675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.435 qpair failed and we were unable to recover it. 
00:27:06.438 [2024-11-20 19:04:28.683136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.438 [2024-11-20 19:04:28.683228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:06.438 qpair failed and we were unable to recover it.
00:27:06.438 [2024-11-20 19:04:28.683448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.438 [2024-11-20 19:04:28.683487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.438 qpair failed and we were unable to recover it. 00:27:06.438 [2024-11-20 19:04:28.683619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.438 [2024-11-20 19:04:28.683653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.438 qpair failed and we were unable to recover it. 00:27:06.438 [2024-11-20 19:04:28.683790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.438 [2024-11-20 19:04:28.683824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.438 qpair failed and we were unable to recover it. 00:27:06.438 [2024-11-20 19:04:28.683951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.438 [2024-11-20 19:04:28.683986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.438 qpair failed and we were unable to recover it. 00:27:06.438 [2024-11-20 19:04:28.684171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.438 [2024-11-20 19:04:28.684217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.438 qpair failed and we were unable to recover it. 
00:27:06.438 [2024-11-20 19:04:28.684409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.438 [2024-11-20 19:04:28.684445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.438 qpair failed and we were unable to recover it. 00:27:06.438 [2024-11-20 19:04:28.684562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.438 [2024-11-20 19:04:28.684596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.438 qpair failed and we were unable to recover it. 00:27:06.438 [2024-11-20 19:04:28.684851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.438 [2024-11-20 19:04:28.684886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.438 qpair failed and we were unable to recover it. 00:27:06.438 [2024-11-20 19:04:28.685009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.438 [2024-11-20 19:04:28.685043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.438 qpair failed and we were unable to recover it. 00:27:06.438 [2024-11-20 19:04:28.685227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.438 [2024-11-20 19:04:28.685262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.438 qpair failed and we were unable to recover it. 
00:27:06.438 [2024-11-20 19:04:28.685446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.438 [2024-11-20 19:04:28.685480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.438 qpair failed and we were unable to recover it. 00:27:06.438 [2024-11-20 19:04:28.685586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.438 [2024-11-20 19:04:28.685620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.438 qpair failed and we were unable to recover it. 00:27:06.438 [2024-11-20 19:04:28.685752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.438 [2024-11-20 19:04:28.685795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.438 qpair failed and we were unable to recover it. 00:27:06.438 [2024-11-20 19:04:28.685920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.438 [2024-11-20 19:04:28.685954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.438 qpair failed and we were unable to recover it. 00:27:06.438 [2024-11-20 19:04:28.686087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.438 [2024-11-20 19:04:28.686122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.438 qpair failed and we were unable to recover it. 
00:27:06.438 [2024-11-20 19:04:28.686279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.438 [2024-11-20 19:04:28.686315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.438 qpair failed and we were unable to recover it. 00:27:06.438 [2024-11-20 19:04:28.686498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.438 [2024-11-20 19:04:28.686531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.438 qpair failed and we were unable to recover it. 00:27:06.438 [2024-11-20 19:04:28.686652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.438 [2024-11-20 19:04:28.686686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.438 qpair failed and we were unable to recover it. 00:27:06.438 [2024-11-20 19:04:28.686939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.438 [2024-11-20 19:04:28.686973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.438 qpair failed and we were unable to recover it. 00:27:06.438 [2024-11-20 19:04:28.687095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.438 [2024-11-20 19:04:28.687129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.438 qpair failed and we were unable to recover it. 
00:27:06.438 [2024-11-20 19:04:28.687331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.438 [2024-11-20 19:04:28.687367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.438 qpair failed and we were unable to recover it. 00:27:06.438 [2024-11-20 19:04:28.687575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.438 [2024-11-20 19:04:28.687610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.438 qpair failed and we were unable to recover it. 00:27:06.438 [2024-11-20 19:04:28.687862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.438 [2024-11-20 19:04:28.687895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.438 qpair failed and we were unable to recover it. 00:27:06.438 [2024-11-20 19:04:28.688077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.438 [2024-11-20 19:04:28.688113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.438 qpair failed and we were unable to recover it. 00:27:06.438 [2024-11-20 19:04:28.688252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.438 [2024-11-20 19:04:28.688288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.438 qpair failed and we were unable to recover it. 
00:27:06.438 [2024-11-20 19:04:28.688412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.438 [2024-11-20 19:04:28.688445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 00:27:06.439 [2024-11-20 19:04:28.688634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.688668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 00:27:06.439 [2024-11-20 19:04:28.688802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.688838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 00:27:06.439 [2024-11-20 19:04:28.688965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.689000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 00:27:06.439 [2024-11-20 19:04:28.689140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.689173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 
00:27:06.439 [2024-11-20 19:04:28.689379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b78af0 is same with the state(6) to be set 00:27:06.439 [2024-11-20 19:04:28.689548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.689587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 00:27:06.439 [2024-11-20 19:04:28.689733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.689767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 00:27:06.439 [2024-11-20 19:04:28.689883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.689916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 00:27:06.439 [2024-11-20 19:04:28.690171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.690216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 00:27:06.439 [2024-11-20 19:04:28.690328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.690362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 
00:27:06.439 [2024-11-20 19:04:28.690563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.690598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 00:27:06.439 [2024-11-20 19:04:28.690717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.690751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 00:27:06.439 [2024-11-20 19:04:28.690869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.690903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 00:27:06.439 [2024-11-20 19:04:28.691050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.691084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 00:27:06.439 [2024-11-20 19:04:28.691315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.691351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 
00:27:06.439 [2024-11-20 19:04:28.691471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.691505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 00:27:06.439 [2024-11-20 19:04:28.691631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.691666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 00:27:06.439 [2024-11-20 19:04:28.691877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.691913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 00:27:06.439 [2024-11-20 19:04:28.692187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.692243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 00:27:06.439 [2024-11-20 19:04:28.692383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.692419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 
00:27:06.439 [2024-11-20 19:04:28.692648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.692683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 00:27:06.439 [2024-11-20 19:04:28.692920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.692954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 00:27:06.439 [2024-11-20 19:04:28.693234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.693272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 00:27:06.439 [2024-11-20 19:04:28.693547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.693582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 00:27:06.439 [2024-11-20 19:04:28.693719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.693754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 
00:27:06.439 [2024-11-20 19:04:28.694056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.694091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 00:27:06.439 [2024-11-20 19:04:28.694294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.694331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 00:27:06.439 [2024-11-20 19:04:28.694527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.694562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 00:27:06.439 [2024-11-20 19:04:28.694789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.694822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 00:27:06.439 [2024-11-20 19:04:28.695110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.695146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 
00:27:06.439 [2024-11-20 19:04:28.695313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.695348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 00:27:06.439 [2024-11-20 19:04:28.695550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.695586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 00:27:06.439 [2024-11-20 19:04:28.695843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.695878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 00:27:06.439 [2024-11-20 19:04:28.696134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.696167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 00:27:06.439 [2024-11-20 19:04:28.696417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.696453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 
00:27:06.439 [2024-11-20 19:04:28.696606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.696640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 00:27:06.439 [2024-11-20 19:04:28.696874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.696911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 00:27:06.439 [2024-11-20 19:04:28.697159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.439 [2024-11-20 19:04:28.697193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.439 qpair failed and we were unable to recover it. 00:27:06.440 [2024-11-20 19:04:28.697480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.440 [2024-11-20 19:04:28.697516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.440 qpair failed and we were unable to recover it. 00:27:06.440 [2024-11-20 19:04:28.697731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.440 [2024-11-20 19:04:28.697765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.440 qpair failed and we were unable to recover it. 
00:27:06.440 [2024-11-20 19:04:28.698065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.440 [2024-11-20 19:04:28.698106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.440 qpair failed and we were unable to recover it. 00:27:06.440 [2024-11-20 19:04:28.698365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.440 [2024-11-20 19:04:28.698402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.440 qpair failed and we were unable to recover it. 00:27:06.440 [2024-11-20 19:04:28.698557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.440 [2024-11-20 19:04:28.698592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.440 qpair failed and we were unable to recover it. 00:27:06.440 [2024-11-20 19:04:28.698839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.440 [2024-11-20 19:04:28.698873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.440 qpair failed and we were unable to recover it. 00:27:06.440 [2024-11-20 19:04:28.699025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.440 [2024-11-20 19:04:28.699059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.440 qpair failed and we were unable to recover it. 
00:27:06.440 [2024-11-20 19:04:28.699263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.440 [2024-11-20 19:04:28.699299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.440 qpair failed and we were unable to recover it. 00:27:06.440 [2024-11-20 19:04:28.699495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.440 [2024-11-20 19:04:28.699530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.440 qpair failed and we were unable to recover it. 00:27:06.440 [2024-11-20 19:04:28.699675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.440 [2024-11-20 19:04:28.699710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.440 qpair failed and we were unable to recover it. 00:27:06.440 [2024-11-20 19:04:28.699905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.440 [2024-11-20 19:04:28.699940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.440 qpair failed and we were unable to recover it. 00:27:06.440 [2024-11-20 19:04:28.700164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.440 [2024-11-20 19:04:28.700199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.440 qpair failed and we were unable to recover it. 
00:27:06.440 [2024-11-20 19:04:28.700395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.440 [2024-11-20 19:04:28.700431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:06.440 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1054:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats without variation from 19:04:28.700632 through 19:04:28.730415 ...]
00:27:06.721 [2024-11-20 19:04:28.730708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.721 [2024-11-20 19:04:28.730742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.721 qpair failed and we were unable to recover it. 00:27:06.721 [2024-11-20 19:04:28.730962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.721 [2024-11-20 19:04:28.730996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.721 qpair failed and we were unable to recover it. 00:27:06.721 [2024-11-20 19:04:28.731215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.721 [2024-11-20 19:04:28.731251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.721 qpair failed and we were unable to recover it. 00:27:06.721 [2024-11-20 19:04:28.731474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.721 [2024-11-20 19:04:28.731510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.721 qpair failed and we were unable to recover it. 00:27:06.721 [2024-11-20 19:04:28.731700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.721 [2024-11-20 19:04:28.731735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.721 qpair failed and we were unable to recover it. 
00:27:06.721 [2024-11-20 19:04:28.731960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.721 [2024-11-20 19:04:28.731994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.721 qpair failed and we were unable to recover it. 00:27:06.721 [2024-11-20 19:04:28.732212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.721 [2024-11-20 19:04:28.732249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.721 qpair failed and we were unable to recover it. 00:27:06.721 [2024-11-20 19:04:28.732457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.721 [2024-11-20 19:04:28.732492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.721 qpair failed and we were unable to recover it. 00:27:06.721 [2024-11-20 19:04:28.732676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.721 [2024-11-20 19:04:28.732711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.721 qpair failed and we were unable to recover it. 00:27:06.721 [2024-11-20 19:04:28.732841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.721 [2024-11-20 19:04:28.732876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.721 qpair failed and we were unable to recover it. 
00:27:06.721 [2024-11-20 19:04:28.733139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.721 [2024-11-20 19:04:28.733174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.721 qpair failed and we were unable to recover it. 00:27:06.721 [2024-11-20 19:04:28.733439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.721 [2024-11-20 19:04:28.733474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.721 qpair failed and we were unable to recover it. 00:27:06.721 [2024-11-20 19:04:28.733706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.721 [2024-11-20 19:04:28.733740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.721 qpair failed and we were unable to recover it. 00:27:06.721 [2024-11-20 19:04:28.733985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.721 [2024-11-20 19:04:28.734020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.721 qpair failed and we were unable to recover it. 00:27:06.721 [2024-11-20 19:04:28.734174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.721 [2024-11-20 19:04:28.734222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.721 qpair failed and we were unable to recover it. 
00:27:06.721 [2024-11-20 19:04:28.734380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.721 [2024-11-20 19:04:28.734415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.721 qpair failed and we were unable to recover it. 00:27:06.721 [2024-11-20 19:04:28.734567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.721 [2024-11-20 19:04:28.734603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.721 qpair failed and we were unable to recover it. 00:27:06.721 [2024-11-20 19:04:28.734865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.721 [2024-11-20 19:04:28.734900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.721 qpair failed and we were unable to recover it. 00:27:06.722 [2024-11-20 19:04:28.735111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.735146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 00:27:06.722 [2024-11-20 19:04:28.735371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.735408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 
00:27:06.722 [2024-11-20 19:04:28.735545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.735579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 00:27:06.722 [2024-11-20 19:04:28.735887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.735921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 00:27:06.722 [2024-11-20 19:04:28.736189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.736237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 00:27:06.722 [2024-11-20 19:04:28.736467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.736502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 00:27:06.722 [2024-11-20 19:04:28.736761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.736795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 
00:27:06.722 [2024-11-20 19:04:28.737101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.737134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 00:27:06.722 [2024-11-20 19:04:28.737339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.737376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 00:27:06.722 [2024-11-20 19:04:28.737518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.737554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 00:27:06.722 [2024-11-20 19:04:28.737759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.737794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 00:27:06.722 [2024-11-20 19:04:28.738019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.738053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 
00:27:06.722 [2024-11-20 19:04:28.738315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.738352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 00:27:06.722 [2024-11-20 19:04:28.738558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.738593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 00:27:06.722 [2024-11-20 19:04:28.738897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.738933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 00:27:06.722 [2024-11-20 19:04:28.739195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.739241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 00:27:06.722 [2024-11-20 19:04:28.739439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.739474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 
00:27:06.722 [2024-11-20 19:04:28.739756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.739798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 00:27:06.722 [2024-11-20 19:04:28.740056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.740092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 00:27:06.722 [2024-11-20 19:04:28.740391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.740427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 00:27:06.722 [2024-11-20 19:04:28.740651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.740686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 00:27:06.722 [2024-11-20 19:04:28.740911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.740946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 
00:27:06.722 [2024-11-20 19:04:28.741255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.741292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 00:27:06.722 [2024-11-20 19:04:28.741502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.741537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 00:27:06.722 [2024-11-20 19:04:28.741690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.741726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 00:27:06.722 [2024-11-20 19:04:28.741986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.742021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 00:27:06.722 [2024-11-20 19:04:28.742340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.742377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 
00:27:06.722 [2024-11-20 19:04:28.742636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.742671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 00:27:06.722 [2024-11-20 19:04:28.742960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.742996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 00:27:06.722 [2024-11-20 19:04:28.743273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.743310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 00:27:06.722 [2024-11-20 19:04:28.743513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.743547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 00:27:06.722 [2024-11-20 19:04:28.743814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.743849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 
00:27:06.722 [2024-11-20 19:04:28.744035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.744070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 00:27:06.722 [2024-11-20 19:04:28.744309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.744346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 00:27:06.722 [2024-11-20 19:04:28.744647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.744683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 00:27:06.722 [2024-11-20 19:04:28.744821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.744856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 00:27:06.722 [2024-11-20 19:04:28.745112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.745148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.722 qpair failed and we were unable to recover it. 
00:27:06.722 [2024-11-20 19:04:28.745455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.722 [2024-11-20 19:04:28.745492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.723 qpair failed and we were unable to recover it. 00:27:06.723 [2024-11-20 19:04:28.745634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.723 [2024-11-20 19:04:28.745669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.723 qpair failed and we were unable to recover it. 00:27:06.723 [2024-11-20 19:04:28.745874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.723 [2024-11-20 19:04:28.745909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.723 qpair failed and we were unable to recover it. 00:27:06.723 [2024-11-20 19:04:28.746045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.723 [2024-11-20 19:04:28.746080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.723 qpair failed and we were unable to recover it. 00:27:06.723 [2024-11-20 19:04:28.746286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.723 [2024-11-20 19:04:28.746322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.723 qpair failed and we were unable to recover it. 
00:27:06.723 [2024-11-20 19:04:28.746477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.723 [2024-11-20 19:04:28.746512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.723 qpair failed and we were unable to recover it. 00:27:06.723 [2024-11-20 19:04:28.746653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.723 [2024-11-20 19:04:28.746688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:06.723 qpair failed and we were unable to recover it. 00:27:06.723 [2024-11-20 19:04:28.746956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.723 [2024-11-20 19:04:28.747036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.723 qpair failed and we were unable to recover it. 00:27:06.723 [2024-11-20 19:04:28.747299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.723 [2024-11-20 19:04:28.747339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.723 qpair failed and we were unable to recover it. 00:27:06.723 [2024-11-20 19:04:28.747497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.723 [2024-11-20 19:04:28.747534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.723 qpair failed and we were unable to recover it. 
00:27:06.723 [2024-11-20 19:04:28.747722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.723 [2024-11-20 19:04:28.747756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.723 qpair failed and we were unable to recover it. 00:27:06.723 [2024-11-20 19:04:28.748090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.723 [2024-11-20 19:04:28.748126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.723 qpair failed and we were unable to recover it. 00:27:06.723 [2024-11-20 19:04:28.748415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.723 [2024-11-20 19:04:28.748452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.723 qpair failed and we were unable to recover it. 00:27:06.723 [2024-11-20 19:04:28.748712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.723 [2024-11-20 19:04:28.748748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.723 qpair failed and we were unable to recover it. 00:27:06.723 [2024-11-20 19:04:28.748964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.723 [2024-11-20 19:04:28.749000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.723 qpair failed and we were unable to recover it. 
00:27:06.723 [2024-11-20 19:04:28.749276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.723 [2024-11-20 19:04:28.749313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.723 qpair failed and we were unable to recover it. 00:27:06.723 [2024-11-20 19:04:28.749522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.723 [2024-11-20 19:04:28.749556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.723 qpair failed and we were unable to recover it. 00:27:06.723 [2024-11-20 19:04:28.749798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.723 [2024-11-20 19:04:28.749832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.723 qpair failed and we were unable to recover it. 00:27:06.723 [2024-11-20 19:04:28.750115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.723 [2024-11-20 19:04:28.750150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.723 qpair failed and we were unable to recover it. 00:27:06.723 [2024-11-20 19:04:28.750372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.723 [2024-11-20 19:04:28.750409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.723 qpair failed and we were unable to recover it. 
00:27:06.723 [2024-11-20 19:04:28.750615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.723 [2024-11-20 19:04:28.750666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:06.723 qpair failed and we were unable to recover it. 
00:27:06.724 [2024-11-20 19:04:28.757855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.724 [2024-11-20 19:04:28.757935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.724 qpair failed and we were unable to recover it. 
00:27:06.726 [2024-11-20 19:04:28.780472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.726 [2024-11-20 19:04:28.780506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.726 qpair failed and we were unable to recover it. 00:27:06.726 [2024-11-20 19:04:28.780787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.726 [2024-11-20 19:04:28.780823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.726 qpair failed and we were unable to recover it. 00:27:06.726 [2024-11-20 19:04:28.781029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.726 [2024-11-20 19:04:28.781064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.726 qpair failed and we were unable to recover it. 00:27:06.726 [2024-11-20 19:04:28.781263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.726 [2024-11-20 19:04:28.781300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.726 qpair failed and we were unable to recover it. 00:27:06.726 [2024-11-20 19:04:28.781452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.726 [2024-11-20 19:04:28.781486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.726 qpair failed and we were unable to recover it. 
00:27:06.726 [2024-11-20 19:04:28.781689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.726 [2024-11-20 19:04:28.781724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.726 qpair failed and we were unable to recover it. 00:27:06.726 [2024-11-20 19:04:28.781950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.726 [2024-11-20 19:04:28.781985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.726 qpair failed and we were unable to recover it. 00:27:06.726 [2024-11-20 19:04:28.782241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.726 [2024-11-20 19:04:28.782277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.726 qpair failed and we were unable to recover it. 00:27:06.726 [2024-11-20 19:04:28.782541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.726 [2024-11-20 19:04:28.782577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.726 qpair failed and we were unable to recover it. 00:27:06.726 [2024-11-20 19:04:28.782716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.726 [2024-11-20 19:04:28.782752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.726 qpair failed and we were unable to recover it. 
00:27:06.726 [2024-11-20 19:04:28.783041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.783077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 00:27:06.727 [2024-11-20 19:04:28.783247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.783284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 00:27:06.727 [2024-11-20 19:04:28.783495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.783531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 00:27:06.727 [2024-11-20 19:04:28.783729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.783764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 00:27:06.727 [2024-11-20 19:04:28.783968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.784003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 
00:27:06.727 [2024-11-20 19:04:28.784215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.784251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 00:27:06.727 [2024-11-20 19:04:28.784492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.784528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 00:27:06.727 [2024-11-20 19:04:28.784712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.784748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 00:27:06.727 [2024-11-20 19:04:28.785046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.785081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 00:27:06.727 [2024-11-20 19:04:28.785364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.785401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 
00:27:06.727 [2024-11-20 19:04:28.785558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.785593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 00:27:06.727 [2024-11-20 19:04:28.785787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.785821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 00:27:06.727 [2024-11-20 19:04:28.786043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.786078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 00:27:06.727 [2024-11-20 19:04:28.786307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.786344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 00:27:06.727 [2024-11-20 19:04:28.786492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.786527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 
00:27:06.727 [2024-11-20 19:04:28.786668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.786702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 00:27:06.727 [2024-11-20 19:04:28.787050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.787085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 00:27:06.727 [2024-11-20 19:04:28.787300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.787337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 00:27:06.727 [2024-11-20 19:04:28.787534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.787569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 00:27:06.727 [2024-11-20 19:04:28.787771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.787805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 
00:27:06.727 [2024-11-20 19:04:28.788005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.788039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 00:27:06.727 [2024-11-20 19:04:28.788367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.788403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 00:27:06.727 [2024-11-20 19:04:28.788656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.788697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 00:27:06.727 [2024-11-20 19:04:28.788928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.788962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 00:27:06.727 [2024-11-20 19:04:28.789147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.789182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 
00:27:06.727 [2024-11-20 19:04:28.789346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.789383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 00:27:06.727 [2024-11-20 19:04:28.789517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.789550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 00:27:06.727 [2024-11-20 19:04:28.789691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.789724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 00:27:06.727 [2024-11-20 19:04:28.790023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.790057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 00:27:06.727 [2024-11-20 19:04:28.790269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.790306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 
00:27:06.727 [2024-11-20 19:04:28.790434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.790469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 00:27:06.727 [2024-11-20 19:04:28.790677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.790713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 00:27:06.727 [2024-11-20 19:04:28.791067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.791103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 00:27:06.727 [2024-11-20 19:04:28.791251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.791287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 00:27:06.727 [2024-11-20 19:04:28.791496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.791531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 
00:27:06.727 [2024-11-20 19:04:28.791662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.791696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 00:27:06.727 [2024-11-20 19:04:28.791994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.792029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 00:27:06.727 [2024-11-20 19:04:28.792232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.727 [2024-11-20 19:04:28.792268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.727 qpair failed and we were unable to recover it. 00:27:06.728 [2024-11-20 19:04:28.792473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.728 [2024-11-20 19:04:28.792507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.728 qpair failed and we were unable to recover it. 00:27:06.728 [2024-11-20 19:04:28.792713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.728 [2024-11-20 19:04:28.792749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.728 qpair failed and we were unable to recover it. 
00:27:06.728 [2024-11-20 19:04:28.793072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.728 [2024-11-20 19:04:28.793108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.728 qpair failed and we were unable to recover it. 00:27:06.728 [2024-11-20 19:04:28.793345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.728 [2024-11-20 19:04:28.793381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.728 qpair failed and we were unable to recover it. 00:27:06.728 [2024-11-20 19:04:28.793588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.728 [2024-11-20 19:04:28.793623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.728 qpair failed and we were unable to recover it. 00:27:06.728 [2024-11-20 19:04:28.793835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.728 [2024-11-20 19:04:28.793870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.728 qpair failed and we were unable to recover it. 00:27:06.728 [2024-11-20 19:04:28.794057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.728 [2024-11-20 19:04:28.794092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.728 qpair failed and we were unable to recover it. 
00:27:06.728 [2024-11-20 19:04:28.794357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.728 [2024-11-20 19:04:28.794393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.728 qpair failed and we were unable to recover it. 00:27:06.728 [2024-11-20 19:04:28.794601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.728 [2024-11-20 19:04:28.794636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.728 qpair failed and we were unable to recover it. 00:27:06.728 [2024-11-20 19:04:28.794852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.728 [2024-11-20 19:04:28.794888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.728 qpair failed and we were unable to recover it. 00:27:06.728 [2024-11-20 19:04:28.795193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.728 [2024-11-20 19:04:28.795242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.728 qpair failed and we were unable to recover it. 00:27:06.728 [2024-11-20 19:04:28.795412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.728 [2024-11-20 19:04:28.795447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.728 qpair failed and we were unable to recover it. 
00:27:06.728 [2024-11-20 19:04:28.795703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.728 [2024-11-20 19:04:28.795737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.728 qpair failed and we were unable to recover it. 00:27:06.728 [2024-11-20 19:04:28.796045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.728 [2024-11-20 19:04:28.796080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.728 qpair failed and we were unable to recover it. 00:27:06.728 [2024-11-20 19:04:28.796350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.728 [2024-11-20 19:04:28.796388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.728 qpair failed and we were unable to recover it. 00:27:06.728 [2024-11-20 19:04:28.796679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.728 [2024-11-20 19:04:28.796714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.728 qpair failed and we were unable to recover it. 00:27:06.728 [2024-11-20 19:04:28.796978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.728 [2024-11-20 19:04:28.797012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.728 qpair failed and we were unable to recover it. 
00:27:06.728 [2024-11-20 19:04:28.797287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.728 [2024-11-20 19:04:28.797323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.728 qpair failed and we were unable to recover it. 00:27:06.728 [2024-11-20 19:04:28.797483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.728 [2024-11-20 19:04:28.797519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.728 qpair failed and we were unable to recover it. 00:27:06.728 [2024-11-20 19:04:28.797705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.728 [2024-11-20 19:04:28.797740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.728 qpair failed and we were unable to recover it. 00:27:06.728 [2024-11-20 19:04:28.797970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.728 [2024-11-20 19:04:28.798004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.728 qpair failed and we were unable to recover it. 00:27:06.728 [2024-11-20 19:04:28.798260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.728 [2024-11-20 19:04:28.798297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.728 qpair failed and we were unable to recover it. 
00:27:06.728 [2024-11-20 19:04:28.798490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.728 [2024-11-20 19:04:28.798524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.728 qpair failed and we were unable to recover it. 00:27:06.728 [2024-11-20 19:04:28.798781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.728 [2024-11-20 19:04:28.798816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.728 qpair failed and we were unable to recover it. 00:27:06.728 [2024-11-20 19:04:28.799015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.728 [2024-11-20 19:04:28.799061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.728 qpair failed and we were unable to recover it. 00:27:06.728 [2024-11-20 19:04:28.799347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.728 [2024-11-20 19:04:28.799385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.728 qpair failed and we were unable to recover it. 00:27:06.728 [2024-11-20 19:04:28.799578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.728 [2024-11-20 19:04:28.799614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.728 qpair failed and we were unable to recover it. 
00:27:06.728 [2024-11-20 19:04:28.799770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.728 [2024-11-20 19:04:28.799805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.728 qpair failed and we were unable to recover it. 00:27:06.728 [2024-11-20 19:04:28.800028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.728 [2024-11-20 19:04:28.800063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.728 qpair failed and we were unable to recover it. 00:27:06.728 [2024-11-20 19:04:28.800217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.728 [2024-11-20 19:04:28.800254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.728 qpair failed and we were unable to recover it. 00:27:06.728 [2024-11-20 19:04:28.800389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.728 [2024-11-20 19:04:28.800425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.728 qpair failed and we were unable to recover it. 00:27:06.728 [2024-11-20 19:04:28.800613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.728 [2024-11-20 19:04:28.800649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.728 qpair failed and we were unable to recover it. 
00:27:06.731 [2024-11-20 19:04:28.827466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.731 [2024-11-20 19:04:28.827501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.731 qpair failed and we were unable to recover it. 00:27:06.731 [2024-11-20 19:04:28.827734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.731 [2024-11-20 19:04:28.827769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.731 qpair failed and we were unable to recover it. 00:27:06.731 [2024-11-20 19:04:28.828050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.731 [2024-11-20 19:04:28.828090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.731 qpair failed and we were unable to recover it. 00:27:06.731 [2024-11-20 19:04:28.828375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.731 [2024-11-20 19:04:28.828411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.731 qpair failed and we were unable to recover it. 00:27:06.732 [2024-11-20 19:04:28.828552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.828588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 
00:27:06.732 [2024-11-20 19:04:28.828791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.828826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 00:27:06.732 [2024-11-20 19:04:28.829056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.829092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 00:27:06.732 [2024-11-20 19:04:28.829347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.829384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 00:27:06.732 [2024-11-20 19:04:28.829587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.829622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 00:27:06.732 [2024-11-20 19:04:28.829936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.829972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 
00:27:06.732 [2024-11-20 19:04:28.830194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.830240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 00:27:06.732 [2024-11-20 19:04:28.830497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.830532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 00:27:06.732 [2024-11-20 19:04:28.830743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.830779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 00:27:06.732 [2024-11-20 19:04:28.830991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.831027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 00:27:06.732 [2024-11-20 19:04:28.831218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.831253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 
00:27:06.732 [2024-11-20 19:04:28.831462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.831497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 00:27:06.732 [2024-11-20 19:04:28.831712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.831748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 00:27:06.732 [2024-11-20 19:04:28.832076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.832112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 00:27:06.732 [2024-11-20 19:04:28.832393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.832430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 00:27:06.732 [2024-11-20 19:04:28.832568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.832603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 
00:27:06.732 [2024-11-20 19:04:28.832873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.832906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 00:27:06.732 [2024-11-20 19:04:28.833139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.833174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 00:27:06.732 [2024-11-20 19:04:28.833413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.833448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 00:27:06.732 [2024-11-20 19:04:28.833658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.833693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 00:27:06.732 [2024-11-20 19:04:28.833989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.834024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 
00:27:06.732 [2024-11-20 19:04:28.834251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.834286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 00:27:06.732 [2024-11-20 19:04:28.834418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.834453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 00:27:06.732 [2024-11-20 19:04:28.834590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.834624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 00:27:06.732 [2024-11-20 19:04:28.834851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.834886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 00:27:06.732 [2024-11-20 19:04:28.835092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.835126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 
00:27:06.732 [2024-11-20 19:04:28.835330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.835366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 00:27:06.732 [2024-11-20 19:04:28.835503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.835537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 00:27:06.732 [2024-11-20 19:04:28.835790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.835824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 00:27:06.732 [2024-11-20 19:04:28.836042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.836076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 00:27:06.732 [2024-11-20 19:04:28.836339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.836376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 
00:27:06.732 [2024-11-20 19:04:28.836634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.836668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 00:27:06.732 [2024-11-20 19:04:28.836919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.836954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 00:27:06.732 [2024-11-20 19:04:28.837141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.837175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 00:27:06.732 [2024-11-20 19:04:28.837443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.837480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 00:27:06.732 [2024-11-20 19:04:28.837618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.837652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 
00:27:06.732 [2024-11-20 19:04:28.837856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.837890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 00:27:06.732 [2024-11-20 19:04:28.838112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.732 [2024-11-20 19:04:28.838148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.732 qpair failed and we were unable to recover it. 00:27:06.733 [2024-11-20 19:04:28.838378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.733 [2024-11-20 19:04:28.838420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.733 qpair failed and we were unable to recover it. 00:27:06.733 [2024-11-20 19:04:28.838615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.733 [2024-11-20 19:04:28.838650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.733 qpair failed and we were unable to recover it. 00:27:06.733 [2024-11-20 19:04:28.838858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.733 [2024-11-20 19:04:28.838893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.733 qpair failed and we were unable to recover it. 
00:27:06.733 [2024-11-20 19:04:28.839097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.733 [2024-11-20 19:04:28.839132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.733 qpair failed and we were unable to recover it. 00:27:06.733 [2024-11-20 19:04:28.839345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.733 [2024-11-20 19:04:28.839382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.733 qpair failed and we were unable to recover it. 00:27:06.733 [2024-11-20 19:04:28.839588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.733 [2024-11-20 19:04:28.839622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.733 qpair failed and we were unable to recover it. 00:27:06.733 [2024-11-20 19:04:28.839770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.733 [2024-11-20 19:04:28.839807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.733 qpair failed and we were unable to recover it. 00:27:06.733 [2024-11-20 19:04:28.840014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.733 [2024-11-20 19:04:28.840049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.733 qpair failed and we were unable to recover it. 
00:27:06.733 [2024-11-20 19:04:28.840185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.733 [2024-11-20 19:04:28.840233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.733 qpair failed and we were unable to recover it. 00:27:06.733 [2024-11-20 19:04:28.840374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.733 [2024-11-20 19:04:28.840409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.733 qpair failed and we were unable to recover it. 00:27:06.733 [2024-11-20 19:04:28.840665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.733 [2024-11-20 19:04:28.840699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.733 qpair failed and we were unable to recover it. 00:27:06.733 [2024-11-20 19:04:28.840906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.733 [2024-11-20 19:04:28.840941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.733 qpair failed and we were unable to recover it. 00:27:06.733 [2024-11-20 19:04:28.841236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.733 [2024-11-20 19:04:28.841272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.733 qpair failed and we were unable to recover it. 
00:27:06.733 [2024-11-20 19:04:28.841489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.733 [2024-11-20 19:04:28.841523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.733 qpair failed and we were unable to recover it. 00:27:06.733 [2024-11-20 19:04:28.841740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.733 [2024-11-20 19:04:28.841776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.733 qpair failed and we were unable to recover it. 00:27:06.733 [2024-11-20 19:04:28.842083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.733 [2024-11-20 19:04:28.842117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.733 qpair failed and we were unable to recover it. 00:27:06.733 [2024-11-20 19:04:28.842345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.733 [2024-11-20 19:04:28.842382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.733 qpair failed and we were unable to recover it. 00:27:06.733 [2024-11-20 19:04:28.842573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.733 [2024-11-20 19:04:28.842607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.733 qpair failed and we were unable to recover it. 
00:27:06.733 [2024-11-20 19:04:28.842744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.733 [2024-11-20 19:04:28.842779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.733 qpair failed and we were unable to recover it. 00:27:06.733 [2024-11-20 19:04:28.843077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.733 [2024-11-20 19:04:28.843111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.733 qpair failed and we were unable to recover it. 00:27:06.733 [2024-11-20 19:04:28.843305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.733 [2024-11-20 19:04:28.843341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.733 qpair failed and we were unable to recover it. 00:27:06.733 [2024-11-20 19:04:28.843539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.733 [2024-11-20 19:04:28.843574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.733 qpair failed and we were unable to recover it. 00:27:06.733 [2024-11-20 19:04:28.843776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.733 [2024-11-20 19:04:28.843811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.733 qpair failed and we were unable to recover it. 
00:27:06.733 [2024-11-20 19:04:28.844021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.733 [2024-11-20 19:04:28.844055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.733 qpair failed and we were unable to recover it. 00:27:06.733 [2024-11-20 19:04:28.844326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.733 [2024-11-20 19:04:28.844364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.733 qpair failed and we were unable to recover it. 00:27:06.733 [2024-11-20 19:04:28.844584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.733 [2024-11-20 19:04:28.844618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.733 qpair failed and we were unable to recover it. 00:27:06.733 [2024-11-20 19:04:28.844870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.733 [2024-11-20 19:04:28.844904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.733 qpair failed and we were unable to recover it. 00:27:06.733 [2024-11-20 19:04:28.845118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.733 [2024-11-20 19:04:28.845152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.733 qpair failed and we were unable to recover it. 
00:27:06.733 [2024-11-20 19:04:28.845430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.733 [2024-11-20 19:04:28.845467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.733 qpair failed and we were unable to recover it. 00:27:06.733 [2024-11-20 19:04:28.845680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.733 [2024-11-20 19:04:28.845715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.733 qpair failed and we were unable to recover it. 00:27:06.733 [2024-11-20 19:04:28.845912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.733 [2024-11-20 19:04:28.845948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.733 qpair failed and we were unable to recover it. 00:27:06.733 [2024-11-20 19:04:28.846142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.733 [2024-11-20 19:04:28.846176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.733 qpair failed and we were unable to recover it. 00:27:06.733 [2024-11-20 19:04:28.846427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.733 [2024-11-20 19:04:28.846463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.733 qpair failed and we were unable to recover it. 
00:27:06.734 [2024-11-20 19:04:28.846721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.734 [2024-11-20 19:04:28.846756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.734 qpair failed and we were unable to recover it. 00:27:06.734 [2024-11-20 19:04:28.846989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.734 [2024-11-20 19:04:28.847024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.734 qpair failed and we were unable to recover it. 00:27:06.734 [2024-11-20 19:04:28.847227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.734 [2024-11-20 19:04:28.847263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.734 qpair failed and we were unable to recover it. 00:27:06.734 [2024-11-20 19:04:28.847468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.734 [2024-11-20 19:04:28.847502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.734 qpair failed and we were unable to recover it. 00:27:06.734 [2024-11-20 19:04:28.847781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.734 [2024-11-20 19:04:28.847816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.734 qpair failed and we were unable to recover it. 
00:27:06.734 [2024-11-20 19:04:28.848097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.734 [2024-11-20 19:04:28.848133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.734 qpair failed and we were unable to recover it. 00:27:06.734 [2024-11-20 19:04:28.848286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.734 [2024-11-20 19:04:28.848322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.734 qpair failed and we were unable to recover it. 00:27:06.734 [2024-11-20 19:04:28.848606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.734 [2024-11-20 19:04:28.848653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.734 qpair failed and we were unable to recover it. 00:27:06.734 [2024-11-20 19:04:28.848942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.734 [2024-11-20 19:04:28.848977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.734 qpair failed and we were unable to recover it. 00:27:06.734 [2024-11-20 19:04:28.849243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.734 [2024-11-20 19:04:28.849279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.734 qpair failed and we were unable to recover it. 
00:27:06.734 [2024-11-20 19:04:28.849413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.734 [2024-11-20 19:04:28.849449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.734 qpair failed and we were unable to recover it.
00:27:06.734 [2024-11-20 19:04:28.849606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.734 [2024-11-20 19:04:28.849640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.734 qpair failed and we were unable to recover it.
00:27:06.734 [2024-11-20 19:04:28.849777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.734 [2024-11-20 19:04:28.849811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.734 qpair failed and we were unable to recover it.
00:27:06.734 [2024-11-20 19:04:28.850091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.734 [2024-11-20 19:04:28.850126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.734 qpair failed and we were unable to recover it.
00:27:06.734 [2024-11-20 19:04:28.850348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.734 [2024-11-20 19:04:28.850384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.734 qpair failed and we were unable to recover it.
00:27:06.734 [2024-11-20 19:04:28.850607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.734 [2024-11-20 19:04:28.850642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.734 qpair failed and we were unable to recover it.
00:27:06.734 [2024-11-20 19:04:28.850840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.734 [2024-11-20 19:04:28.850875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.734 qpair failed and we were unable to recover it.
00:27:06.734 [2024-11-20 19:04:28.851154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.734 [2024-11-20 19:04:28.851189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.734 qpair failed and we were unable to recover it.
00:27:06.734 [2024-11-20 19:04:28.851418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.734 [2024-11-20 19:04:28.851454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.734 qpair failed and we were unable to recover it.
00:27:06.734 [2024-11-20 19:04:28.851589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.734 [2024-11-20 19:04:28.851623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.734 qpair failed and we were unable to recover it.
00:27:06.734 [2024-11-20 19:04:28.851821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.734 [2024-11-20 19:04:28.851856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.734 qpair failed and we were unable to recover it.
00:27:06.734 [2024-11-20 19:04:28.852117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.734 [2024-11-20 19:04:28.852153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.734 qpair failed and we were unable to recover it.
00:27:06.734 [2024-11-20 19:04:28.852376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.734 [2024-11-20 19:04:28.852412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.734 qpair failed and we were unable to recover it.
00:27:06.734 [2024-11-20 19:04:28.852602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.734 [2024-11-20 19:04:28.852637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.734 qpair failed and we were unable to recover it.
00:27:06.734 [2024-11-20 19:04:28.852832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.734 [2024-11-20 19:04:28.852866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.734 qpair failed and we were unable to recover it.
00:27:06.734 [2024-11-20 19:04:28.853101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.734 [2024-11-20 19:04:28.853136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.734 qpair failed and we were unable to recover it.
00:27:06.734 [2024-11-20 19:04:28.853285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.734 [2024-11-20 19:04:28.853321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.734 qpair failed and we were unable to recover it.
00:27:06.734 [2024-11-20 19:04:28.853526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.734 [2024-11-20 19:04:28.853561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.734 qpair failed and we were unable to recover it.
00:27:06.734 [2024-11-20 19:04:28.853820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.734 [2024-11-20 19:04:28.853854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.734 qpair failed and we were unable to recover it.
00:27:06.734 [2024-11-20 19:04:28.854125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.734 [2024-11-20 19:04:28.854161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.734 qpair failed and we were unable to recover it.
00:27:06.734 [2024-11-20 19:04:28.854455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.734 [2024-11-20 19:04:28.854490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.734 qpair failed and we were unable to recover it.
00:27:06.734 [2024-11-20 19:04:28.854632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.734 [2024-11-20 19:04:28.854667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.734 qpair failed and we were unable to recover it.
00:27:06.734 [2024-11-20 19:04:28.854967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.734 [2024-11-20 19:04:28.855001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.734 qpair failed and we were unable to recover it.
00:27:06.734 [2024-11-20 19:04:28.855142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.734 [2024-11-20 19:04:28.855177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.734 qpair failed and we were unable to recover it.
00:27:06.734 [2024-11-20 19:04:28.855456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.734 [2024-11-20 19:04:28.855492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.734 qpair failed and we were unable to recover it.
00:27:06.734 [2024-11-20 19:04:28.855753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.734 [2024-11-20 19:04:28.855789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.734 qpair failed and we were unable to recover it.
00:27:06.734 [2024-11-20 19:04:28.855988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.734 [2024-11-20 19:04:28.856022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.734 qpair failed and we were unable to recover it.
00:27:06.734 [2024-11-20 19:04:28.856233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.734 [2024-11-20 19:04:28.856271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.856480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.856514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.856751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.856785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.857044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.857078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.857266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.857303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.857512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.857547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.857735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.857770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.858057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.858091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.858402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.858438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.858647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.858682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.859019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.859061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.859276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.859313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.859465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.859501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.859713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.859749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.859972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.860008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.860235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.860271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.860459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.860492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.860638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.860673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.860808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.860842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.861124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.861160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.861373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.861410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.861558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.861594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.861786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.861820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.862017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.862053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.862281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.862319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.862623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.862658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.862936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.862970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.863178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.863223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.863432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.863466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.863674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.863708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.863955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.863990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.864259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.864295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.864578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.864614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.864747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.864783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.864913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.864948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.865086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.865121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.865400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.865437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.865699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.735 [2024-11-20 19:04:28.865780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.735 qpair failed and we were unable to recover it.
00:27:06.735 [2024-11-20 19:04:28.866077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.736 [2024-11-20 19:04:28.866118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.736 qpair failed and we were unable to recover it.
00:27:06.736 [2024-11-20 19:04:28.866317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.736 [2024-11-20 19:04:28.866355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.736 qpair failed and we were unable to recover it.
00:27:06.736 [2024-11-20 19:04:28.866563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.736 [2024-11-20 19:04:28.866599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.736 qpair failed and we were unable to recover it.
00:27:06.736 [2024-11-20 19:04:28.866808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.736 [2024-11-20 19:04:28.866843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.736 qpair failed and we were unable to recover it.
00:27:06.736 [2024-11-20 19:04:28.867058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.736 [2024-11-20 19:04:28.867093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.736 qpair failed and we were unable to recover it.
00:27:06.736 [2024-11-20 19:04:28.867309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.736 [2024-11-20 19:04:28.867348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.736 qpair failed and we were unable to recover it.
00:27:06.736 [2024-11-20 19:04:28.867511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.736 [2024-11-20 19:04:28.867546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.736 qpair failed and we were unable to recover it.
00:27:06.736 [2024-11-20 19:04:28.867694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.736 [2024-11-20 19:04:28.867729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.736 qpair failed and we were unable to recover it.
00:27:06.736 [2024-11-20 19:04:28.868037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.736 [2024-11-20 19:04:28.868072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.736 qpair failed and we were unable to recover it.
00:27:06.736 [2024-11-20 19:04:28.868211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.736 [2024-11-20 19:04:28.868246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.736 qpair failed and we were unable to recover it.
00:27:06.736 [2024-11-20 19:04:28.868410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.736 [2024-11-20 19:04:28.868445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.736 qpair failed and we were unable to recover it.
00:27:06.736 [2024-11-20 19:04:28.868599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.736 [2024-11-20 19:04:28.868634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.736 qpair failed and we were unable to recover it.
00:27:06.736 [2024-11-20 19:04:28.868850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.736 [2024-11-20 19:04:28.868884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.736 qpair failed and we were unable to recover it.
00:27:06.736 [2024-11-20 19:04:28.869084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.736 [2024-11-20 19:04:28.869119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.736 qpair failed and we were unable to recover it.
00:27:06.736 [2024-11-20 19:04:28.869339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.736 [2024-11-20 19:04:28.869376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.736 qpair failed and we were unable to recover it.
00:27:06.736 [2024-11-20 19:04:28.869581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.736 [2024-11-20 19:04:28.869615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.736 qpair failed and we were unable to recover it.
00:27:06.736 [2024-11-20 19:04:28.869745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.736 [2024-11-20 19:04:28.869781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.736 qpair failed and we were unable to recover it.
00:27:06.736 [2024-11-20 19:04:28.869917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.736 [2024-11-20 19:04:28.869952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.736 qpair failed and we were unable to recover it.
00:27:06.736 [2024-11-20 19:04:28.870105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.736 [2024-11-20 19:04:28.870141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.736 qpair failed and we were unable to recover it.
00:27:06.736 [2024-11-20 19:04:28.870376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.736 [2024-11-20 19:04:28.870412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.736 qpair failed and we were unable to recover it.
00:27:06.736 [2024-11-20 19:04:28.870620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.736 [2024-11-20 19:04:28.870655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.736 qpair failed and we were unable to recover it.
00:27:06.736 [2024-11-20 19:04:28.870963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.736 [2024-11-20 19:04:28.870999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.736 qpair failed and we were unable to recover it.
00:27:06.736 [2024-11-20 19:04:28.871153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.736 [2024-11-20 19:04:28.871187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.736 qpair failed and we were unable to recover it.
00:27:06.736 [2024-11-20 19:04:28.871363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.736 [2024-11-20 19:04:28.871398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.736 qpair failed and we were unable to recover it.
00:27:06.736 [2024-11-20 19:04:28.871606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.736 [2024-11-20 19:04:28.871641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.736 qpair failed and we were unable to recover it.
00:27:06.736 [2024-11-20 19:04:28.871903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.736 [2024-11-20 19:04:28.871938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.736 qpair failed and we were unable to recover it.
00:27:06.736 [2024-11-20 19:04:28.872068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.736 [2024-11-20 19:04:28.872110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.736 qpair failed and we were unable to recover it.
00:27:06.736 [2024-11-20 19:04:28.872319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.736 [2024-11-20 19:04:28.872356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.736 qpair failed and we were unable to recover it.
00:27:06.736 [2024-11-20 19:04:28.872513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.736 [2024-11-20 19:04:28.872548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.736 qpair failed and we were unable to recover it.
00:27:06.736 [2024-11-20 19:04:28.872677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.736 [2024-11-20 19:04:28.872711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:06.736 qpair failed and we were unable to recover it.
00:27:06.736 [2024-11-20 19:04:28.873010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.736 [2024-11-20 19:04:28.873044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.736 qpair failed and we were unable to recover it. 00:27:06.736 [2024-11-20 19:04:28.873319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.736 [2024-11-20 19:04:28.873355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.736 qpair failed and we were unable to recover it. 00:27:06.736 [2024-11-20 19:04:28.873549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.736 [2024-11-20 19:04:28.873583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.736 qpair failed and we were unable to recover it. 00:27:06.736 [2024-11-20 19:04:28.873737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.736 [2024-11-20 19:04:28.873771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.736 qpair failed and we were unable to recover it. 00:27:06.736 [2024-11-20 19:04:28.874086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.736 [2024-11-20 19:04:28.874122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.736 qpair failed and we were unable to recover it. 
00:27:06.736 [2024-11-20 19:04:28.874414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.736 [2024-11-20 19:04:28.874450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.736 qpair failed and we were unable to recover it. 00:27:06.736 [2024-11-20 19:04:28.874692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.736 [2024-11-20 19:04:28.874727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.736 qpair failed and we were unable to recover it. 00:27:06.736 [2024-11-20 19:04:28.874940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.736 [2024-11-20 19:04:28.874976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.736 qpair failed and we were unable to recover it. 00:27:06.736 [2024-11-20 19:04:28.875176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.736 [2024-11-20 19:04:28.875240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.737 qpair failed and we were unable to recover it. 00:27:06.737 [2024-11-20 19:04:28.875393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.737 [2024-11-20 19:04:28.875427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.737 qpair failed and we were unable to recover it. 
00:27:06.737 [2024-11-20 19:04:28.875641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.737 [2024-11-20 19:04:28.875676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.737 qpair failed and we were unable to recover it. 00:27:06.737 [2024-11-20 19:04:28.875917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.737 [2024-11-20 19:04:28.875952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.737 qpair failed and we were unable to recover it. 00:27:06.737 [2024-11-20 19:04:28.876216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.737 [2024-11-20 19:04:28.876253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.737 qpair failed and we were unable to recover it. 00:27:06.737 [2024-11-20 19:04:28.876392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.737 [2024-11-20 19:04:28.876427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.737 qpair failed and we were unable to recover it. 00:27:06.737 [2024-11-20 19:04:28.876590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.737 [2024-11-20 19:04:28.876625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.737 qpair failed and we were unable to recover it. 
00:27:06.737 [2024-11-20 19:04:28.876777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.737 [2024-11-20 19:04:28.876813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.737 qpair failed and we were unable to recover it. 00:27:06.737 [2024-11-20 19:04:28.877094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.737 [2024-11-20 19:04:28.877128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.737 qpair failed and we were unable to recover it. 00:27:06.737 [2024-11-20 19:04:28.877381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.737 [2024-11-20 19:04:28.877416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.737 qpair failed and we were unable to recover it. 00:27:06.737 [2024-11-20 19:04:28.877602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.737 [2024-11-20 19:04:28.877637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.737 qpair failed and we were unable to recover it. 00:27:06.737 [2024-11-20 19:04:28.877789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.737 [2024-11-20 19:04:28.877825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.737 qpair failed and we were unable to recover it. 
00:27:06.737 [2024-11-20 19:04:28.878106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.737 [2024-11-20 19:04:28.878140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.737 qpair failed and we were unable to recover it. 00:27:06.737 [2024-11-20 19:04:28.878359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.737 [2024-11-20 19:04:28.878394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.737 qpair failed and we were unable to recover it. 00:27:06.737 [2024-11-20 19:04:28.878538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.737 [2024-11-20 19:04:28.878573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.737 qpair failed and we were unable to recover it. 00:27:06.737 [2024-11-20 19:04:28.878702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.737 [2024-11-20 19:04:28.878742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.737 qpair failed and we were unable to recover it. 00:27:06.737 [2024-11-20 19:04:28.879013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.737 [2024-11-20 19:04:28.879047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.737 qpair failed and we were unable to recover it. 
00:27:06.737 [2024-11-20 19:04:28.879163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.737 [2024-11-20 19:04:28.879196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.737 qpair failed and we were unable to recover it. 00:27:06.737 [2024-11-20 19:04:28.879399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.737 [2024-11-20 19:04:28.879436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.737 qpair failed and we were unable to recover it. 00:27:06.737 [2024-11-20 19:04:28.879597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.737 [2024-11-20 19:04:28.879631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.737 qpair failed and we were unable to recover it. 00:27:06.737 [2024-11-20 19:04:28.879822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.737 [2024-11-20 19:04:28.879858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.737 qpair failed and we were unable to recover it. 00:27:06.737 [2024-11-20 19:04:28.879985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.737 [2024-11-20 19:04:28.880019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.737 qpair failed and we were unable to recover it. 
00:27:06.737 [2024-11-20 19:04:28.880231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.737 [2024-11-20 19:04:28.880267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.737 qpair failed and we were unable to recover it. 00:27:06.737 [2024-11-20 19:04:28.880477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.737 [2024-11-20 19:04:28.880513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.737 qpair failed and we were unable to recover it. 00:27:06.737 [2024-11-20 19:04:28.880720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.737 [2024-11-20 19:04:28.880755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.737 qpair failed and we were unable to recover it. 00:27:06.737 [2024-11-20 19:04:28.880889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.737 [2024-11-20 19:04:28.880924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.737 qpair failed and we were unable to recover it. 00:27:06.737 [2024-11-20 19:04:28.881227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.737 [2024-11-20 19:04:28.881264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.737 qpair failed and we were unable to recover it. 
00:27:06.737 [2024-11-20 19:04:28.881472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.737 [2024-11-20 19:04:28.881509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.737 qpair failed and we were unable to recover it. 00:27:06.737 [2024-11-20 19:04:28.881743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.737 [2024-11-20 19:04:28.881778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.737 qpair failed and we were unable to recover it. 00:27:06.737 [2024-11-20 19:04:28.882079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.737 [2024-11-20 19:04:28.882118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.737 qpair failed and we were unable to recover it. 00:27:06.737 [2024-11-20 19:04:28.882385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.737 [2024-11-20 19:04:28.882422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.737 qpair failed and we were unable to recover it. 00:27:06.737 [2024-11-20 19:04:28.882566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.737 [2024-11-20 19:04:28.882601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.737 qpair failed and we were unable to recover it. 
00:27:06.737 [2024-11-20 19:04:28.882812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.737 [2024-11-20 19:04:28.882847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.737 qpair failed and we were unable to recover it. 00:27:06.737 [2024-11-20 19:04:28.883034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.737 [2024-11-20 19:04:28.883068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.737 qpair failed and we were unable to recover it. 00:27:06.738 [2024-11-20 19:04:28.883238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.883275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 00:27:06.738 [2024-11-20 19:04:28.883421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.883456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 00:27:06.738 [2024-11-20 19:04:28.883595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.883630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 
00:27:06.738 [2024-11-20 19:04:28.883840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.883875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 00:27:06.738 [2024-11-20 19:04:28.884104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.884138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 00:27:06.738 [2024-11-20 19:04:28.884347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.884384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 00:27:06.738 [2024-11-20 19:04:28.884578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.884612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 00:27:06.738 [2024-11-20 19:04:28.884814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.884849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 
00:27:06.738 [2024-11-20 19:04:28.885041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.885089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 00:27:06.738 [2024-11-20 19:04:28.885230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.885268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 00:27:06.738 [2024-11-20 19:04:28.885510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.885545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 00:27:06.738 [2024-11-20 19:04:28.885671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.885707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 00:27:06.738 [2024-11-20 19:04:28.885865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.885900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 
00:27:06.738 [2024-11-20 19:04:28.886141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.886178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 00:27:06.738 [2024-11-20 19:04:28.886410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.886446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 00:27:06.738 [2024-11-20 19:04:28.886591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.886627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 00:27:06.738 [2024-11-20 19:04:28.886790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.886825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 00:27:06.738 [2024-11-20 19:04:28.887086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.887120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 
00:27:06.738 [2024-11-20 19:04:28.887327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.887365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 00:27:06.738 [2024-11-20 19:04:28.887517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.887552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 00:27:06.738 [2024-11-20 19:04:28.887763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.887797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 00:27:06.738 [2024-11-20 19:04:28.888047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.888082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 00:27:06.738 [2024-11-20 19:04:28.888320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.888357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 
00:27:06.738 [2024-11-20 19:04:28.888615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.888649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 00:27:06.738 [2024-11-20 19:04:28.888930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.888965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 00:27:06.738 [2024-11-20 19:04:28.889174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.889217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 00:27:06.738 [2024-11-20 19:04:28.889403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.889439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 00:27:06.738 [2024-11-20 19:04:28.889600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.889635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 
00:27:06.738 [2024-11-20 19:04:28.889791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.889827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 00:27:06.738 [2024-11-20 19:04:28.889973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.890008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 00:27:06.738 [2024-11-20 19:04:28.890234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.890270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 00:27:06.738 [2024-11-20 19:04:28.890459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.890494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 00:27:06.738 [2024-11-20 19:04:28.890625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.890661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 
00:27:06.738 [2024-11-20 19:04:28.890882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.890916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 00:27:06.738 [2024-11-20 19:04:28.891196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.891242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 00:27:06.738 [2024-11-20 19:04:28.891387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.891421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 00:27:06.738 [2024-11-20 19:04:28.891586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.738 [2024-11-20 19:04:28.891622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.738 qpair failed and we were unable to recover it. 00:27:06.738 [2024-11-20 19:04:28.891925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.739 [2024-11-20 19:04:28.891960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.739 qpair failed and we were unable to recover it. 
00:27:06.739 [2024-11-20 19:04:28.892105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.739 [2024-11-20 19:04:28.892139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.739 qpair failed and we were unable to recover it. 00:27:06.739 [2024-11-20 19:04:28.892344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.739 [2024-11-20 19:04:28.892380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.739 qpair failed and we were unable to recover it. 00:27:06.739 [2024-11-20 19:04:28.892587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.739 [2024-11-20 19:04:28.892622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.739 qpair failed and we were unable to recover it. 00:27:06.739 [2024-11-20 19:04:28.892782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.739 [2024-11-20 19:04:28.892817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.739 qpair failed and we were unable to recover it. 00:27:06.739 [2024-11-20 19:04:28.893000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.739 [2024-11-20 19:04:28.893033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.739 qpair failed and we were unable to recover it. 
00:27:06.739 [2024-11-20 19:04:28.893253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.739 [2024-11-20 19:04:28.893290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:06.739 qpair failed and we were unable to recover it. 
[... identical connect() failed / sock connection error pairs (errno = 111, ECONNREFUSED) repeat from 19:04:28.893 through 19:04:28.921 for tqpair=0x1b6aba0, 0x7f741c000b90, and 0x7f7424000b90, all targeting addr=10.0.0.2, port=4420; each repeat ends with "qpair failed and we were unable to recover it." ...]
00:27:06.742 [2024-11-20 19:04:28.921548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.742 [2024-11-20 19:04:28.921583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.742 qpair failed and we were unable to recover it. 00:27:06.742 [2024-11-20 19:04:28.921731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.742 [2024-11-20 19:04:28.921766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.742 qpair failed and we were unable to recover it. 00:27:06.742 [2024-11-20 19:04:28.921964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.742 [2024-11-20 19:04:28.921999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.742 qpair failed and we were unable to recover it. 00:27:06.742 [2024-11-20 19:04:28.922297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.742 [2024-11-20 19:04:28.922334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.742 qpair failed and we were unable to recover it. 00:27:06.742 [2024-11-20 19:04:28.922543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.742 [2024-11-20 19:04:28.922576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.742 qpair failed and we were unable to recover it. 
00:27:06.742 [2024-11-20 19:04:28.922725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.742 [2024-11-20 19:04:28.922760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.742 qpair failed and we were unable to recover it. 00:27:06.742 [2024-11-20 19:04:28.922944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.742 [2024-11-20 19:04:28.922979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.742 qpair failed and we were unable to recover it. 00:27:06.742 [2024-11-20 19:04:28.923236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.742 [2024-11-20 19:04:28.923273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.742 qpair failed and we were unable to recover it. 00:27:06.742 [2024-11-20 19:04:28.923531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.742 [2024-11-20 19:04:28.923566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.742 qpair failed and we were unable to recover it. 00:27:06.742 [2024-11-20 19:04:28.923765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.742 [2024-11-20 19:04:28.923800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.742 qpair failed and we were unable to recover it. 
00:27:06.742 [2024-11-20 19:04:28.924084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.742 [2024-11-20 19:04:28.924118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.742 qpair failed and we were unable to recover it. 00:27:06.742 [2024-11-20 19:04:28.924398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.742 [2024-11-20 19:04:28.924439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.742 qpair failed and we were unable to recover it. 00:27:06.742 [2024-11-20 19:04:28.924598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.742 [2024-11-20 19:04:28.924632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.742 qpair failed and we were unable to recover it. 00:27:06.742 [2024-11-20 19:04:28.924912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.742 [2024-11-20 19:04:28.924950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.742 qpair failed and we were unable to recover it. 00:27:06.742 [2024-11-20 19:04:28.925164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.742 [2024-11-20 19:04:28.925198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.742 qpair failed and we were unable to recover it. 
00:27:06.742 [2024-11-20 19:04:28.925434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.742 [2024-11-20 19:04:28.925471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.742 qpair failed and we were unable to recover it. 00:27:06.742 [2024-11-20 19:04:28.925660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.742 [2024-11-20 19:04:28.925696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.742 qpair failed and we were unable to recover it. 00:27:06.742 [2024-11-20 19:04:28.925902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.742 [2024-11-20 19:04:28.925938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.742 qpair failed and we were unable to recover it. 00:27:06.742 [2024-11-20 19:04:28.926092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.742 [2024-11-20 19:04:28.926128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.742 qpair failed and we were unable to recover it. 00:27:06.742 [2024-11-20 19:04:28.926289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.742 [2024-11-20 19:04:28.926325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.742 qpair failed and we were unable to recover it. 
00:27:06.742 [2024-11-20 19:04:28.926535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.742 [2024-11-20 19:04:28.926570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.742 qpair failed and we were unable to recover it. 00:27:06.742 [2024-11-20 19:04:28.926838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.742 [2024-11-20 19:04:28.926873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.742 qpair failed and we were unable to recover it. 00:27:06.742 [2024-11-20 19:04:28.927061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.927097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 00:27:06.743 [2024-11-20 19:04:28.927369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.927405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 00:27:06.743 [2024-11-20 19:04:28.927603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.927638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 
00:27:06.743 [2024-11-20 19:04:28.927901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.927937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 00:27:06.743 [2024-11-20 19:04:28.928217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.928253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 00:27:06.743 [2024-11-20 19:04:28.928536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.928573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 00:27:06.743 [2024-11-20 19:04:28.928716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.928751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 00:27:06.743 [2024-11-20 19:04:28.928963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.928997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 
00:27:06.743 [2024-11-20 19:04:28.929198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.929243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 00:27:06.743 [2024-11-20 19:04:28.929450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.929485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 00:27:06.743 [2024-11-20 19:04:28.929765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.929800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 00:27:06.743 [2024-11-20 19:04:28.930030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.930065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 00:27:06.743 [2024-11-20 19:04:28.930392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.930429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 
00:27:06.743 [2024-11-20 19:04:28.930654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.930690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 00:27:06.743 [2024-11-20 19:04:28.930923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.930957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 00:27:06.743 [2024-11-20 19:04:28.931243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.931280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 00:27:06.743 [2024-11-20 19:04:28.931495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.931531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 00:27:06.743 [2024-11-20 19:04:28.931740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.931774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 
00:27:06.743 [2024-11-20 19:04:28.932060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.932095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 00:27:06.743 [2024-11-20 19:04:28.932413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.932450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 00:27:06.743 [2024-11-20 19:04:28.932585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.932620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 00:27:06.743 [2024-11-20 19:04:28.932820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.932856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 00:27:06.743 [2024-11-20 19:04:28.933151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.933185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 
00:27:06.743 [2024-11-20 19:04:28.933455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.933491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 00:27:06.743 [2024-11-20 19:04:28.933703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.933737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 00:27:06.743 [2024-11-20 19:04:28.934045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.934081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 00:27:06.743 [2024-11-20 19:04:28.934292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.934328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 00:27:06.743 [2024-11-20 19:04:28.934605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.934640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 
00:27:06.743 [2024-11-20 19:04:28.934838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.934874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 00:27:06.743 [2024-11-20 19:04:28.935104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.935144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 00:27:06.743 [2024-11-20 19:04:28.935355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.935392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 00:27:06.743 [2024-11-20 19:04:28.935553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.935587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 00:27:06.743 [2024-11-20 19:04:28.935798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.935834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 
00:27:06.743 [2024-11-20 19:04:28.936142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.936178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 00:27:06.743 [2024-11-20 19:04:28.936408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.936444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 00:27:06.743 [2024-11-20 19:04:28.936654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.936690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 00:27:06.743 [2024-11-20 19:04:28.936928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.936962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 00:27:06.743 [2024-11-20 19:04:28.937269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.743 [2024-11-20 19:04:28.937305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.743 qpair failed and we were unable to recover it. 
00:27:06.743 [2024-11-20 19:04:28.937503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.744 [2024-11-20 19:04:28.937539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.744 qpair failed and we were unable to recover it. 00:27:06.744 [2024-11-20 19:04:28.937706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.744 [2024-11-20 19:04:28.937741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.744 qpair failed and we were unable to recover it. 00:27:06.744 [2024-11-20 19:04:28.938031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.744 [2024-11-20 19:04:28.938065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.744 qpair failed and we were unable to recover it. 00:27:06.744 [2024-11-20 19:04:28.938263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.744 [2024-11-20 19:04:28.938300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.744 qpair failed and we were unable to recover it. 00:27:06.744 [2024-11-20 19:04:28.938506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.744 [2024-11-20 19:04:28.938541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.744 qpair failed and we were unable to recover it. 
00:27:06.744 [2024-11-20 19:04:28.938812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.744 [2024-11-20 19:04:28.938848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.744 qpair failed and we were unable to recover it. 00:27:06.744 [2024-11-20 19:04:28.939128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.744 [2024-11-20 19:04:28.939165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.744 qpair failed and we were unable to recover it. 00:27:06.744 [2024-11-20 19:04:28.939447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.744 [2024-11-20 19:04:28.939483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.744 qpair failed and we were unable to recover it. 00:27:06.744 [2024-11-20 19:04:28.939708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.744 [2024-11-20 19:04:28.939742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.744 qpair failed and we were unable to recover it. 00:27:06.744 [2024-11-20 19:04:28.940020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.744 [2024-11-20 19:04:28.940056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.744 qpair failed and we were unable to recover it. 
00:27:06.744 [2024-11-20 19:04:28.940335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.744 [2024-11-20 19:04:28.940371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.744 qpair failed and we were unable to recover it. 00:27:06.744 [2024-11-20 19:04:28.940656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.744 [2024-11-20 19:04:28.940690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.744 qpair failed and we were unable to recover it. 00:27:06.744 [2024-11-20 19:04:28.940966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.744 [2024-11-20 19:04:28.941002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.744 qpair failed and we were unable to recover it. 00:27:06.744 [2024-11-20 19:04:28.941294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.744 [2024-11-20 19:04:28.941331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.744 qpair failed and we were unable to recover it. 00:27:06.744 [2024-11-20 19:04:28.941486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.744 [2024-11-20 19:04:28.941521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.744 qpair failed and we were unable to recover it. 
00:27:06.744 [2024-11-20 19:04:28.941723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.744 [2024-11-20 19:04:28.941758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.744 qpair failed and we were unable to recover it. 00:27:06.744 [2024-11-20 19:04:28.942036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.744 [2024-11-20 19:04:28.942071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.744 qpair failed and we were unable to recover it. 00:27:06.744 [2024-11-20 19:04:28.942311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.744 [2024-11-20 19:04:28.942347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.744 qpair failed and we were unable to recover it. 00:27:06.744 [2024-11-20 19:04:28.942613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.744 [2024-11-20 19:04:28.942648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.744 qpair failed and we were unable to recover it. 00:27:06.744 [2024-11-20 19:04:28.942903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.744 [2024-11-20 19:04:28.942938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.744 qpair failed and we were unable to recover it. 
00:27:06.747 [2024-11-20 19:04:28.972868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.747 [2024-11-20 19:04:28.972903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.747 qpair failed and we were unable to recover it. 00:27:06.747 [2024-11-20 19:04:28.973187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.747 [2024-11-20 19:04:28.973233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.747 qpair failed and we were unable to recover it. 00:27:06.747 [2024-11-20 19:04:28.973506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.747 [2024-11-20 19:04:28.973541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.747 qpair failed and we were unable to recover it. 00:27:06.747 [2024-11-20 19:04:28.973817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.747 [2024-11-20 19:04:28.973852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.747 qpair failed and we were unable to recover it. 00:27:06.747 [2024-11-20 19:04:28.974139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.747 [2024-11-20 19:04:28.974175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.747 qpair failed and we were unable to recover it. 
00:27:06.747 [2024-11-20 19:04:28.974453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.747 [2024-11-20 19:04:28.974489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.747 qpair failed and we were unable to recover it. 00:27:06.747 [2024-11-20 19:04:28.974703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.747 [2024-11-20 19:04:28.974738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.747 qpair failed and we were unable to recover it. 00:27:06.747 [2024-11-20 19:04:28.974924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.747 [2024-11-20 19:04:28.974959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.747 qpair failed and we were unable to recover it. 00:27:06.747 [2024-11-20 19:04:28.975274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.747 [2024-11-20 19:04:28.975312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.747 qpair failed and we were unable to recover it. 00:27:06.747 [2024-11-20 19:04:28.975595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.747 [2024-11-20 19:04:28.975631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.747 qpair failed and we were unable to recover it. 
00:27:06.747 [2024-11-20 19:04:28.975820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.747 [2024-11-20 19:04:28.975855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.747 qpair failed and we were unable to recover it. 00:27:06.747 [2024-11-20 19:04:28.976050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.747 [2024-11-20 19:04:28.976085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.747 qpair failed and we were unable to recover it. 00:27:06.747 [2024-11-20 19:04:28.976237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.747 [2024-11-20 19:04:28.976274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.747 qpair failed and we were unable to recover it. 00:27:06.747 [2024-11-20 19:04:28.976476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.747 [2024-11-20 19:04:28.976512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.747 qpair failed and we were unable to recover it. 00:27:06.747 [2024-11-20 19:04:28.976771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.747 [2024-11-20 19:04:28.976806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.747 qpair failed and we were unable to recover it. 
00:27:06.747 [2024-11-20 19:04:28.977110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.747 [2024-11-20 19:04:28.977145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.747 qpair failed and we were unable to recover it. 00:27:06.747 [2024-11-20 19:04:28.977452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.977489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 00:27:06.748 [2024-11-20 19:04:28.977770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.977805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 00:27:06.748 [2024-11-20 19:04:28.978065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.978100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 00:27:06.748 [2024-11-20 19:04:28.978404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.978441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 
00:27:06.748 [2024-11-20 19:04:28.978740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.978775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 00:27:06.748 [2024-11-20 19:04:28.979007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.979047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 00:27:06.748 [2024-11-20 19:04:28.979262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.979299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 00:27:06.748 [2024-11-20 19:04:28.979583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.979618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 00:27:06.748 [2024-11-20 19:04:28.979868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.979903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 
00:27:06.748 [2024-11-20 19:04:28.980050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.980085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 00:27:06.748 [2024-11-20 19:04:28.980361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.980398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 00:27:06.748 [2024-11-20 19:04:28.980534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.980569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 00:27:06.748 [2024-11-20 19:04:28.980769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.980804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 00:27:06.748 [2024-11-20 19:04:28.981104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.981139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 
00:27:06.748 [2024-11-20 19:04:28.981359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.981394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 00:27:06.748 [2024-11-20 19:04:28.981681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.981716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 00:27:06.748 [2024-11-20 19:04:28.981898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.981932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 00:27:06.748 [2024-11-20 19:04:28.982140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.982174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 00:27:06.748 [2024-11-20 19:04:28.982462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.982498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 
00:27:06.748 [2024-11-20 19:04:28.982779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.982815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 00:27:06.748 [2024-11-20 19:04:28.983094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.983131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 00:27:06.748 [2024-11-20 19:04:28.983413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.983449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 00:27:06.748 [2024-11-20 19:04:28.983708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.983744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 00:27:06.748 [2024-11-20 19:04:28.984043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.984079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 
00:27:06.748 [2024-11-20 19:04:28.984290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.984327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 00:27:06.748 [2024-11-20 19:04:28.984541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.984577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 00:27:06.748 [2024-11-20 19:04:28.984859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.984895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 00:27:06.748 [2024-11-20 19:04:28.985173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.985215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 00:27:06.748 [2024-11-20 19:04:28.985512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.985547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 
00:27:06.748 [2024-11-20 19:04:28.985803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.985839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 00:27:06.748 [2024-11-20 19:04:28.985989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.986024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 00:27:06.748 [2024-11-20 19:04:28.986221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.986258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 00:27:06.748 [2024-11-20 19:04:28.986480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.986517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 00:27:06.748 [2024-11-20 19:04:28.986649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.986684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 
00:27:06.748 [2024-11-20 19:04:28.986989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.987024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 00:27:06.748 [2024-11-20 19:04:28.987310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.987346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 00:27:06.748 [2024-11-20 19:04:28.987621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.987657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 00:27:06.748 [2024-11-20 19:04:28.987960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.748 [2024-11-20 19:04:28.987995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.748 qpair failed and we were unable to recover it. 00:27:06.749 [2024-11-20 19:04:28.988258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.749 [2024-11-20 19:04:28.988295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.749 qpair failed and we were unable to recover it. 
00:27:06.749 [2024-11-20 19:04:28.988574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.749 [2024-11-20 19:04:28.988608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.749 qpair failed and we were unable to recover it. 00:27:06.749 [2024-11-20 19:04:28.988837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.749 [2024-11-20 19:04:28.988873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.749 qpair failed and we were unable to recover it. 00:27:06.749 [2024-11-20 19:04:28.989067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.749 [2024-11-20 19:04:28.989102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.749 qpair failed and we were unable to recover it. 00:27:06.749 [2024-11-20 19:04:28.989318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.749 [2024-11-20 19:04:28.989355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.749 qpair failed and we were unable to recover it. 00:27:06.749 [2024-11-20 19:04:28.989658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.749 [2024-11-20 19:04:28.989693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.749 qpair failed and we were unable to recover it. 
00:27:06.749 [2024-11-20 19:04:28.989902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.749 [2024-11-20 19:04:28.989936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.749 qpair failed and we were unable to recover it. 00:27:06.749 [2024-11-20 19:04:28.990131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.749 [2024-11-20 19:04:28.990172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.749 qpair failed and we were unable to recover it. 00:27:06.749 [2024-11-20 19:04:28.990385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.749 [2024-11-20 19:04:28.990420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.749 qpair failed and we were unable to recover it. 00:27:06.749 [2024-11-20 19:04:28.990567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.749 [2024-11-20 19:04:28.990601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.749 qpair failed and we were unable to recover it. 00:27:06.749 [2024-11-20 19:04:28.990748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.749 [2024-11-20 19:04:28.990784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.749 qpair failed and we were unable to recover it. 
00:27:06.749 [2024-11-20 19:04:28.991062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.749 [2024-11-20 19:04:28.991097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.749 qpair failed and we were unable to recover it. 00:27:06.749 [2024-11-20 19:04:28.991302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.749 [2024-11-20 19:04:28.991339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.749 qpair failed and we were unable to recover it. 00:27:06.749 [2024-11-20 19:04:28.991600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.749 [2024-11-20 19:04:28.991635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.749 qpair failed and we were unable to recover it. 00:27:06.749 [2024-11-20 19:04:28.991821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.749 [2024-11-20 19:04:28.991856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.749 qpair failed and we were unable to recover it. 00:27:06.749 [2024-11-20 19:04:28.992139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.749 [2024-11-20 19:04:28.992174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.749 qpair failed and we were unable to recover it. 
00:27:06.749 [2024-11-20 19:04:28.992454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.749 [2024-11-20 19:04:28.992489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.749 qpair failed and we were unable to recover it. 00:27:06.749 [2024-11-20 19:04:28.992797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.749 [2024-11-20 19:04:28.992833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.749 qpair failed and we were unable to recover it. 00:27:06.749 [2024-11-20 19:04:28.993108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.749 [2024-11-20 19:04:28.993143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.749 qpair failed and we were unable to recover it. 00:27:06.749 [2024-11-20 19:04:28.993284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.749 [2024-11-20 19:04:28.993321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.749 qpair failed and we were unable to recover it. 00:27:06.749 [2024-11-20 19:04:28.993646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.749 [2024-11-20 19:04:28.993680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.749 qpair failed and we were unable to recover it. 
00:27:06.749 [2024-11-20 19:04:28.993897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.749 [2024-11-20 19:04:28.993932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:06.749 qpair failed and we were unable to recover it. 
00:27:06.749–00:27:06.752 [... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair repeats with advancing timestamps (2024-11-20 19:04:28.994 through 19:04:29.025): every connect() attempt for tqpair=0x7f7424000b90 to addr=10.0.0.2, port=4420 failed with errno = 111 (ECONNREFUSED), and each time the qpair failed and could not be recovered ...]
00:27:06.752 [2024-11-20 19:04:29.026050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.752 [2024-11-20 19:04:29.026086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-11-20 19:04:29.026309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.026345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-11-20 19:04:29.026603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.026638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-11-20 19:04:29.026927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.026962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-11-20 19:04:29.027246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.027284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 
00:27:07.031 [2024-11-20 19:04:29.027578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.027614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-11-20 19:04:29.027893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.027928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-11-20 19:04:29.028134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.028170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-11-20 19:04:29.028502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.028538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-11-20 19:04:29.028793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.028828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 
00:27:07.031 [2024-11-20 19:04:29.029011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.029047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-11-20 19:04:29.029304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.029341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-11-20 19:04:29.029538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.029573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-11-20 19:04:29.029845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.029880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-11-20 19:04:29.030174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.030222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 
00:27:07.031 [2024-11-20 19:04:29.030486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.030522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-11-20 19:04:29.030709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.030743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-11-20 19:04:29.031001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.031037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-11-20 19:04:29.031318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.031355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-11-20 19:04:29.031632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.031667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 
00:27:07.031 [2024-11-20 19:04:29.031947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.031982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-11-20 19:04:29.032211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.032247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-11-20 19:04:29.032506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.032541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-11-20 19:04:29.032726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.032761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-11-20 19:04:29.032980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.033016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 
00:27:07.031 [2024-11-20 19:04:29.033224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.033259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-11-20 19:04:29.033551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.033587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-11-20 19:04:29.033873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.033908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-11-20 19:04:29.034180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.034224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-11-20 19:04:29.034434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.034469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 
00:27:07.031 [2024-11-20 19:04:29.034752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.034786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-11-20 19:04:29.035059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.035101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-11-20 19:04:29.035358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.035394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-11-20 19:04:29.035678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.035713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-11-20 19:04:29.035988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.036023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 
00:27:07.031 [2024-11-20 19:04:29.036310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.036347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-11-20 19:04:29.036637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.036673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-11-20 19:04:29.036944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.036980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-11-20 19:04:29.037278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.037316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-11-20 19:04:29.037530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.037565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 
00:27:07.031 [2024-11-20 19:04:29.037846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.037881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-11-20 19:04:29.038158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-11-20 19:04:29.038192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-11-20 19:04:29.038477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-11-20 19:04:29.038511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-11-20 19:04:29.038696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-11-20 19:04:29.038731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-11-20 19:04:29.038961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-11-20 19:04:29.038996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 
00:27:07.032 [2024-11-20 19:04:29.039139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-11-20 19:04:29.039174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-11-20 19:04:29.039414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-11-20 19:04:29.039450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-11-20 19:04:29.039757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-11-20 19:04:29.039792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-11-20 19:04:29.039992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-11-20 19:04:29.040027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-11-20 19:04:29.040235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-11-20 19:04:29.040272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 
00:27:07.032 [2024-11-20 19:04:29.040492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-11-20 19:04:29.040528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-11-20 19:04:29.040726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-11-20 19:04:29.040760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-11-20 19:04:29.041067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-11-20 19:04:29.041102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-11-20 19:04:29.041364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-11-20 19:04:29.041402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-11-20 19:04:29.041615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-11-20 19:04:29.041650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 
00:27:07.032 [2024-11-20 19:04:29.041768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-11-20 19:04:29.041803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-11-20 19:04:29.042078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-11-20 19:04:29.042114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-11-20 19:04:29.042419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-11-20 19:04:29.042456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-11-20 19:04:29.042718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-11-20 19:04:29.042753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-11-20 19:04:29.043011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-11-20 19:04:29.043046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 
00:27:07.032 [2024-11-20 19:04:29.043268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-11-20 19:04:29.043304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-11-20 19:04:29.043491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-11-20 19:04:29.043526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-11-20 19:04:29.043833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-11-20 19:04:29.043867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-11-20 19:04:29.044053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-11-20 19:04:29.044088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-11-20 19:04:29.044311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-11-20 19:04:29.044348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 
00:27:07.032 [2024-11-20 19:04:29.044543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-11-20 19:04:29.044578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-11-20 19:04:29.044887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-11-20 19:04:29.044922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-11-20 19:04:29.045218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-11-20 19:04:29.045253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-11-20 19:04:29.045528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-11-20 19:04:29.045563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-11-20 19:04:29.045819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-11-20 19:04:29.045854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 
00:27:07.032 [2024-11-20 19:04:29.046037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-11-20 19:04:29.046071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-11-20 19:04:29.046297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-11-20 19:04:29.046339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-11-20 19:04:29.046581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-11-20 19:04:29.046616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-11-20 19:04:29.046897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-11-20 19:04:29.046931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-11-20 19:04:29.047221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-11-20 19:04:29.047258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 
00:27:07.032 [2024-11-20 19:04:29.047531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.032 [2024-11-20 19:04:29.047566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.032 qpair failed and we were unable to recover it.
00:27:07.034 [... the same three-line failure sequence repeats for every subsequent reconnect attempt from 19:04:29.047 through 19:04:29.079, always with errno = 111, tqpair=0x7f7424000b90, addr=10.0.0.2, port=4420, each attempt ending in "qpair failed and we were unable to recover it." ...]
00:27:07.034 [2024-11-20 19:04:29.079784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.034 [2024-11-20 19:04:29.079818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.034 qpair failed and we were unable to recover it. 00:27:07.034 [2024-11-20 19:04:29.080085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.034 [2024-11-20 19:04:29.080120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.034 qpair failed and we were unable to recover it. 00:27:07.034 [2024-11-20 19:04:29.080417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.034 [2024-11-20 19:04:29.080454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.034 qpair failed and we were unable to recover it. 00:27:07.034 [2024-11-20 19:04:29.080735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.034 [2024-11-20 19:04:29.080770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.034 qpair failed and we were unable to recover it. 00:27:07.034 [2024-11-20 19:04:29.080984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.034 [2024-11-20 19:04:29.081019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.034 qpair failed and we were unable to recover it. 
00:27:07.034 [2024-11-20 19:04:29.081230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.034 [2024-11-20 19:04:29.081268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.034 qpair failed and we were unable to recover it. 00:27:07.034 [2024-11-20 19:04:29.081525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.034 [2024-11-20 19:04:29.081561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.034 qpair failed and we were unable to recover it. 00:27:07.034 [2024-11-20 19:04:29.081846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.034 [2024-11-20 19:04:29.081881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.034 qpair failed and we were unable to recover it. 00:27:07.034 [2024-11-20 19:04:29.082157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.034 [2024-11-20 19:04:29.082193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.034 qpair failed and we were unable to recover it. 00:27:07.034 [2024-11-20 19:04:29.082482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.034 [2024-11-20 19:04:29.082518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.034 qpair failed and we were unable to recover it. 
00:27:07.034 [2024-11-20 19:04:29.082806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.034 [2024-11-20 19:04:29.082841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.034 qpair failed and we were unable to recover it. 00:27:07.034 [2024-11-20 19:04:29.083115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.034 [2024-11-20 19:04:29.083149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.034 qpair failed and we were unable to recover it. 00:27:07.034 [2024-11-20 19:04:29.083351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.034 [2024-11-20 19:04:29.083388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.034 qpair failed and we were unable to recover it. 00:27:07.034 [2024-11-20 19:04:29.083653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.034 [2024-11-20 19:04:29.083688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.034 qpair failed and we were unable to recover it. 00:27:07.034 [2024-11-20 19:04:29.083967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.034 [2024-11-20 19:04:29.084002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.034 qpair failed and we were unable to recover it. 
00:27:07.034 [2024-11-20 19:04:29.084324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.034 [2024-11-20 19:04:29.084363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.034 qpair failed and we were unable to recover it. 00:27:07.034 [2024-11-20 19:04:29.084658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.034 [2024-11-20 19:04:29.084693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.034 qpair failed and we were unable to recover it. 00:27:07.034 [2024-11-20 19:04:29.084953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.034 [2024-11-20 19:04:29.084988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.034 qpair failed and we were unable to recover it. 00:27:07.034 [2024-11-20 19:04:29.085293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.034 [2024-11-20 19:04:29.085351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.034 qpair failed and we were unable to recover it. 00:27:07.034 [2024-11-20 19:04:29.085586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.034 [2024-11-20 19:04:29.085621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.034 qpair failed and we were unable to recover it. 
00:27:07.034 [2024-11-20 19:04:29.085818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.034 [2024-11-20 19:04:29.085852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.034 qpair failed and we were unable to recover it. 00:27:07.034 [2024-11-20 19:04:29.086107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.034 [2024-11-20 19:04:29.086142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.034 qpair failed and we were unable to recover it. 00:27:07.034 [2024-11-20 19:04:29.086345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.034 [2024-11-20 19:04:29.086380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.034 qpair failed and we were unable to recover it. 00:27:07.034 [2024-11-20 19:04:29.086654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.034 [2024-11-20 19:04:29.086689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.034 qpair failed and we were unable to recover it. 00:27:07.034 [2024-11-20 19:04:29.086898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.034 [2024-11-20 19:04:29.086932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.034 qpair failed and we were unable to recover it. 
00:27:07.034 [2024-11-20 19:04:29.087076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.034 [2024-11-20 19:04:29.087110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.034 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.087250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.087287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.087503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.087538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.087822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.087862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.088135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.088169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 
00:27:07.035 [2024-11-20 19:04:29.088397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.088433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.088667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.088703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.088953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.088988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.089225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.089262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.089534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.089569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 
00:27:07.035 [2024-11-20 19:04:29.089775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.089809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.090017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.090052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.090307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.090344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.090530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.090565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.090849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.090884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 
00:27:07.035 [2024-11-20 19:04:29.091030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.091064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.091351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.091388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.091549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.091584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.091865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.091900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.092200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.092245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 
00:27:07.035 [2024-11-20 19:04:29.092459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.092495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.092760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.092795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.093053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.093088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.093371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.093408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.093685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.093720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 
00:27:07.035 [2024-11-20 19:04:29.093909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.093943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.094134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.094169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.094391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.094427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.094629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.094663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.094923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.094958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 
00:27:07.035 [2024-11-20 19:04:29.095255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.095291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.095497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.095533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.095731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.095766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.096046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.096080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.096309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.096345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 
00:27:07.035 [2024-11-20 19:04:29.096627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.096662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.096940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.096974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.097169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.097213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.097334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.097369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.097656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.097690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 
00:27:07.035 [2024-11-20 19:04:29.097960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.097996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.098286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.098323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.098593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.098626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.098839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.098880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.099084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.099119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 
00:27:07.035 [2024-11-20 19:04:29.099400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.099436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.099699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.099734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.100010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.100047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.100333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.100370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.100662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.100696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 
00:27:07.035 [2024-11-20 19:04:29.100964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.101001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.101295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.101332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.101523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.101557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.101754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.101790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.035 qpair failed and we were unable to recover it. 00:27:07.035 [2024-11-20 19:04:29.101974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.035 [2024-11-20 19:04:29.102008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.036 qpair failed and we were unable to recover it. 
00:27:07.036 [... the same connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." sequence repeats for every subsequent retry, through 2024-11-20 19:04:29.133657 ...]
00:27:07.038 [2024-11-20 19:04:29.133913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.133949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.134157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.134194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.134468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.134504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.134764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.134801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.135008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.135048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 
00:27:07.038 [2024-11-20 19:04:29.135244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.135280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.135552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.135587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.135853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.135889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.136090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.136125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.136440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.136482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 
00:27:07.038 [2024-11-20 19:04:29.136722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.136759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.137042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.137076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.137346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.137384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.137650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.137685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.137971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.138006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 
00:27:07.038 [2024-11-20 19:04:29.138221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.138257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.138488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.138523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.138726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.138760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.138957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.138992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.139249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.139286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 
00:27:07.038 [2024-11-20 19:04:29.139571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.139605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.139855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.139889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.140089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.140124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.140381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.140418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.140676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.140711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 
00:27:07.038 [2024-11-20 19:04:29.141042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.141078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.141326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.141363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.141569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.141605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.141859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.141894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.142185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.142229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 
00:27:07.038 [2024-11-20 19:04:29.142383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.142418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.142694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.142730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.143016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.143051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.143240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.143277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.143471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.143506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 
00:27:07.038 [2024-11-20 19:04:29.143644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.143686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.143900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.143936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.144159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.144194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.144401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.144443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.144671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.144704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 
00:27:07.038 [2024-11-20 19:04:29.144907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.144942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.145078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.145113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.145327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.145363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.145646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.145681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.145963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.145998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 
00:27:07.038 [2024-11-20 19:04:29.146264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.146300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.146455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.146491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.146611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.146647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.146878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.146914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.147077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.147113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 
00:27:07.038 [2024-11-20 19:04:29.147377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-11-20 19:04:29.147413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-11-20 19:04:29.147675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.147710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.147846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.147884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.148087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.148121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.148425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.148466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 
00:27:07.039 [2024-11-20 19:04:29.148672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.148707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.148984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.149020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.149328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.149367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.149580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.149616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.149885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.149920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 
00:27:07.039 [2024-11-20 19:04:29.150149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.150185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.150421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.150456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.150714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.150749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.151052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.151088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.151331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.151367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 
00:27:07.039 [2024-11-20 19:04:29.151647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.151683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.151993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.152027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.152244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.152280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.152480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.152515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.152725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.152761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 
00:27:07.039 [2024-11-20 19:04:29.152874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.152907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.153109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.153145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.153405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.153441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.153638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.153674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.153887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.153922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 
00:27:07.039 [2024-11-20 19:04:29.154131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.154167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.154375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.154410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.154601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.154637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.154780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.154820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.154970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.155006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 
00:27:07.039 [2024-11-20 19:04:29.155272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.155309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.155515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.155550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.155741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.155777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.155982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.156017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.156277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.156313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 
00:27:07.039 [2024-11-20 19:04:29.156520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.156556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.156846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.156881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.157137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.157172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.157394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.157431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.157668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.157702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 
00:27:07.039 [2024-11-20 19:04:29.157959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.157995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.158304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.158340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.158553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.158588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.158842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.158877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.159086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.159121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 
00:27:07.039 [2024-11-20 19:04:29.159382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.159419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.159707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.159741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.159974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.160009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.160291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.160326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.160635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.160670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 
00:27:07.039 [2024-11-20 19:04:29.160805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.160841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.161095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.161129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.161408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.161445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.161729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.161763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.039 [2024-11-20 19:04:29.162032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.162068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 
00:27:07.039 [2024-11-20 19:04:29.162339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.039 [2024-11-20 19:04:29.162420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.039 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.162679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.162718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.163027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.163064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.163332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.163369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.163650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.163685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 
00:27:07.040 [2024-11-20 19:04:29.163956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.163992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.164213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.164250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.164395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.164429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.164638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.164673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.164911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.164946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 
00:27:07.040 [2024-11-20 19:04:29.165141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.165176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.165394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.165430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.165569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.165604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.165903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.165948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.166087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.166122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 
00:27:07.040 [2024-11-20 19:04:29.166403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.166442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.166651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.166686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.166877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.166911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.167191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.167242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.167391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.167427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 
00:27:07.040 [2024-11-20 19:04:29.167614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.167649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.167807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.167841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.168082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.168117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.168305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.168342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.168564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.168600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 
00:27:07.040 [2024-11-20 19:04:29.168795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.168829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.169030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.169065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.169281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.169317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.169574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.169609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.169819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.169854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 
00:27:07.040 [2024-11-20 19:04:29.170104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.170139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.170353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.170389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.170595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.170629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.170839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.170874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.171101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.171136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 
00:27:07.040 [2024-11-20 19:04:29.171350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.171385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.171639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.171673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.171956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.171992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.172289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.172325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.172480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.172514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 
00:27:07.040 [2024-11-20 19:04:29.172855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.172938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.173180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.173239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.173507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.173543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.173769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.173805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.174062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.174097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 
00:27:07.040 [2024-11-20 19:04:29.174329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.174367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.174572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.174607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.174837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.174871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.175064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.040 [2024-11-20 19:04:29.175099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.040 qpair failed and we were unable to recover it. 00:27:07.040 [2024-11-20 19:04:29.175293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-11-20 19:04:29.175327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 
00:27:07.041 [2024-11-20 19:04:29.175533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-11-20 19:04:29.175568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-11-20 19:04:29.175857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-11-20 19:04:29.175890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-11-20 19:04:29.176116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-11-20 19:04:29.176151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-11-20 19:04:29.176447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-11-20 19:04:29.176483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-11-20 19:04:29.176707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-11-20 19:04:29.176743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 
00:27:07.041 [2024-11-20 19:04:29.176975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-11-20 19:04:29.177011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-11-20 19:04:29.177145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-11-20 19:04:29.177179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-11-20 19:04:29.177389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-11-20 19:04:29.177424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-11-20 19:04:29.177560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-11-20 19:04:29.177595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-11-20 19:04:29.177809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-11-20 19:04:29.177845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 
00:27:07.041 [2024-11-20 19:04:29.178054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-11-20 19:04:29.178090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-11-20 19:04:29.178288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-11-20 19:04:29.178325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-11-20 19:04:29.178514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-11-20 19:04:29.178548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-11-20 19:04:29.178746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-11-20 19:04:29.178782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-11-20 19:04:29.179061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-11-20 19:04:29.179095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 
00:27:07.041 [2024-11-20 19:04:29.179284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-11-20 19:04:29.179320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it.
00:27:07.042 [log condensed: the same connect() failed (errno = 111) / sock connection error message pair for tqpair=0x1b6aba0 (addr=10.0.0.2, port=4420) repeats for each subsequent reconnect attempt from 19:04:29.179608 through 19:04:29.209621, each ending with "qpair failed and we were unable to recover it."]
00:27:07.042 [2024-11-20 19:04:29.209809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-11-20 19:04:29.209844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-11-20 19:04:29.210117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-11-20 19:04:29.210152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-11-20 19:04:29.210471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-11-20 19:04:29.210507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-11-20 19:04:29.210764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-11-20 19:04:29.210799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-11-20 19:04:29.211103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-11-20 19:04:29.211138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 
00:27:07.042 [2024-11-20 19:04:29.211446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-11-20 19:04:29.211483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-11-20 19:04:29.211748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-11-20 19:04:29.211782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-11-20 19:04:29.212075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.212110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.212331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.212366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.212518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.212559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 
00:27:07.043 [2024-11-20 19:04:29.212847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.212881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.213126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.213160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.213348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.213383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.213590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.213624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.213927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.213961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 
00:27:07.043 [2024-11-20 19:04:29.214229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.214265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.214466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.214500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.214760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.214795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.215014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.215049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.215287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.215322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 
00:27:07.043 [2024-11-20 19:04:29.215614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.215650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.215921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.215955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.216245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.216280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.216454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.216488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.216679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.216713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 
00:27:07.043 [2024-11-20 19:04:29.217039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.217074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.217271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.217306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.217590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.217625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.217813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.217848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.217977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.218012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 
00:27:07.043 [2024-11-20 19:04:29.218195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.218241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.218520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.218554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.218764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.218799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.219061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.219095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.219237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.219272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 
00:27:07.043 [2024-11-20 19:04:29.219462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.219496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.219698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.219738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.220002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.220037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.220320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.220356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.220585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.220621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 
00:27:07.043 [2024-11-20 19:04:29.220879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.220914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.221130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.221165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.221455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.221492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.221705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.221740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.221995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.222031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 
00:27:07.043 [2024-11-20 19:04:29.222286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.222321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.222574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.222609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.222838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.222872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.223129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.223165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.223456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.223493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 
00:27:07.043 [2024-11-20 19:04:29.223701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.223736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.224024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.224058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.224258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.224294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.224573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.224608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.224744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.224779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 
00:27:07.043 [2024-11-20 19:04:29.225078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.225112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.225377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.225413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.225700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.225734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.226045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.226079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.226351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.226388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 
00:27:07.043 [2024-11-20 19:04:29.226653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.226688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.226944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.226978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.227176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.227221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.227520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.227555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.227845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.227881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 
00:27:07.043 [2024-11-20 19:04:29.228155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.228191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.228472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.228507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.228789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.228823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.229105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.229140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-11-20 19:04:29.229348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.229383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 
00:27:07.043 [2024-11-20 19:04:29.229508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-11-20 19:04:29.229542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.229851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.229886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.230084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.230118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.230298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.230335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.230591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.230626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 
00:27:07.044 [2024-11-20 19:04:29.230936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.230971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.231158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.231194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.231413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.231450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.231729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.231763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.231893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.231928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 
00:27:07.044 [2024-11-20 19:04:29.232135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.232170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.232448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.232494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.232705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.232741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.232999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.233033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.233331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.233368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 
00:27:07.044 [2024-11-20 19:04:29.233656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.233690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.233900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.233935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.234226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.234263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.234533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.234568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.234761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.234796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 
00:27:07.044 [2024-11-20 19:04:29.234915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.234950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.235166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.235199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.235359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.235395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.235653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.235688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.235893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.235928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 
00:27:07.044 [2024-11-20 19:04:29.236459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.236501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.236764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.236804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.237007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.237043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.237242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.237279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.237484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.237519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 
00:27:07.044 [2024-11-20 19:04:29.237780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.237814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.238075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.238111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.238302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.238339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.238484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.238518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.238772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.238816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 
00:27:07.044 [2024-11-20 19:04:29.238946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.238980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.239099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.239133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.239343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.239379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.239580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.239614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.239821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.239857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 
00:27:07.044 [2024-11-20 19:04:29.240045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.240080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.240339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.240376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.240657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.240692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.240889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.240924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.241082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.241118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 
00:27:07.044 [2024-11-20 19:04:29.241383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.241419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.241703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.241738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.241996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.242031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.242175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.242219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.242425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.242460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 
00:27:07.044 [2024-11-20 19:04:29.242672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.242707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.242963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.242998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.243114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.243149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.243414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.243450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.243640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.243676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 
00:27:07.044 [2024-11-20 19:04:29.243953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.243987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.244244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.244281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.244489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.244524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.244665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.244700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.244887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.244922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 
00:27:07.044 [2024-11-20 19:04:29.245138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.245174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.245446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.245487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.245688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.245723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.245860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.245895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.246091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.246125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 
00:27:07.044 [2024-11-20 19:04:29.246386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.246423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.246719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.246754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.044 qpair failed and we were unable to recover it. 00:27:07.044 [2024-11-20 19:04:29.246961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.044 [2024-11-20 19:04:29.246996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.045 qpair failed and we were unable to recover it. 00:27:07.045 [2024-11-20 19:04:29.247186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.045 [2024-11-20 19:04:29.247249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.045 qpair failed and we were unable to recover it. 00:27:07.045 [2024-11-20 19:04:29.247506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.045 [2024-11-20 19:04:29.247541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.045 qpair failed and we were unable to recover it. 
00:27:07.045 [2024-11-20 19:04:29.247822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.045 [2024-11-20 19:04:29.247856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.045 qpair failed and we were unable to recover it. 00:27:07.045 [2024-11-20 19:04:29.248044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.045 [2024-11-20 19:04:29.248078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.045 qpair failed and we were unable to recover it. 00:27:07.045 [2024-11-20 19:04:29.248279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.045 [2024-11-20 19:04:29.248316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.045 qpair failed and we were unable to recover it. 00:27:07.045 [2024-11-20 19:04:29.248461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.045 [2024-11-20 19:04:29.248496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.045 qpair failed and we were unable to recover it. 00:27:07.045 [2024-11-20 19:04:29.248624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.045 [2024-11-20 19:04:29.248659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.045 qpair failed and we were unable to recover it. 
00:27:07.045 [2024-11-20 19:04:29.248970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.045 [2024-11-20 19:04:29.249006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.045 qpair failed and we were unable to recover it. 00:27:07.045 [2024-11-20 19:04:29.249135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.045 [2024-11-20 19:04:29.249170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.045 qpair failed and we were unable to recover it. 00:27:07.045 [2024-11-20 19:04:29.249407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.045 [2024-11-20 19:04:29.249442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.045 qpair failed and we were unable to recover it. 00:27:07.045 [2024-11-20 19:04:29.249564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.045 [2024-11-20 19:04:29.249598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.045 qpair failed and we were unable to recover it. 00:27:07.045 [2024-11-20 19:04:29.249869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.045 [2024-11-20 19:04:29.249904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.045 qpair failed and we were unable to recover it. 
00:27:07.045 [2024-11-20 19:04:29.250182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.045 [2024-11-20 19:04:29.250227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.045 qpair failed and we were unable to recover it. 00:27:07.045 [2024-11-20 19:04:29.250367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.045 [2024-11-20 19:04:29.250402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.045 qpair failed and we were unable to recover it. 00:27:07.045 [2024-11-20 19:04:29.250681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.045 [2024-11-20 19:04:29.250716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.045 qpair failed and we were unable to recover it. 00:27:07.045 [2024-11-20 19:04:29.250918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.045 [2024-11-20 19:04:29.250953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.045 qpair failed and we were unable to recover it. 00:27:07.045 [2024-11-20 19:04:29.251236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.045 [2024-11-20 19:04:29.251273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.045 qpair failed and we were unable to recover it. 
00:27:07.045 [2024-11-20 19:04:29.251462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.045 [2024-11-20 19:04:29.251497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.045 qpair failed and we were unable to recover it. 00:27:07.045 [2024-11-20 19:04:29.251752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.045 [2024-11-20 19:04:29.251786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.045 qpair failed and we were unable to recover it. 00:27:07.045 [2024-11-20 19:04:29.251988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.045 [2024-11-20 19:04:29.252022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.045 qpair failed and we were unable to recover it. 00:27:07.045 [2024-11-20 19:04:29.252236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.045 [2024-11-20 19:04:29.252279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.045 qpair failed and we were unable to recover it. 00:27:07.045 [2024-11-20 19:04:29.252503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.045 [2024-11-20 19:04:29.252539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.045 qpair failed and we were unable to recover it. 
00:27:07.045 [2024-11-20 19:04:29.252747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.045 [2024-11-20 19:04:29.252783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.045 qpair failed and we were unable to recover it. 00:27:07.045 [2024-11-20 19:04:29.252934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.045 [2024-11-20 19:04:29.252968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.045 qpair failed and we were unable to recover it. 00:27:07.045 [2024-11-20 19:04:29.253182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.045 [2024-11-20 19:04:29.253233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.045 qpair failed and we were unable to recover it. 00:27:07.045 [2024-11-20 19:04:29.253370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.045 [2024-11-20 19:04:29.253405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.045 qpair failed and we were unable to recover it. 00:27:07.045 [2024-11-20 19:04:29.253664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.045 [2024-11-20 19:04:29.253699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.045 qpair failed and we were unable to recover it. 
00:27:07.045 [2024-11-20 19:04:29.253889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.045 [2024-11-20 19:04:29.253924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.045 qpair failed and we were unable to recover it. 00:27:07.045 [2024-11-20 19:04:29.254212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.045 [2024-11-20 19:04:29.254248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.045 qpair failed and we were unable to recover it. 00:27:07.045 [2024-11-20 19:04:29.254363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.045 [2024-11-20 19:04:29.254397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.045 qpair failed and we were unable to recover it. 00:27:07.045 [2024-11-20 19:04:29.254607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.045 [2024-11-20 19:04:29.254641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.045 qpair failed and we were unable to recover it. 00:27:07.045 [2024-11-20 19:04:29.254850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.045 [2024-11-20 19:04:29.254884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.045 qpair failed and we were unable to recover it. 
00:27:07.045 [2024-11-20 19:04:29.255143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.045 [2024-11-20 19:04:29.255179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.045 qpair failed and we were unable to recover it.
00:27:07.045 [2024-11-20 19:04:29.255868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.045 [2024-11-20 19:04:29.255948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.045 qpair failed and we were unable to recover it.
[... the same three-line error sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 19:04:29.256239 through 19:04:29.281497 ...]
00:27:07.047 [2024-11-20 19:04:29.281690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.281724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.281921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.281955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.282091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.282125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.282311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.282346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.282472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.282506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 
00:27:07.047 [2024-11-20 19:04:29.282634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.282669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.282916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.282949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.283161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.283196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.283428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.283462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.283602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.283636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 
00:27:07.047 [2024-11-20 19:04:29.283762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.283795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.284017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.284050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.284272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.284308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.284432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.284466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.284689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.284723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 
00:27:07.047 [2024-11-20 19:04:29.284839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.284874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.285145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.285179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.285335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.285369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.285502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.285536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.285717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.285750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 
00:27:07.047 [2024-11-20 19:04:29.286018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.286057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.286195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.286241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.286429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.286463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.286701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.286735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.286955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.286989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 
00:27:07.047 [2024-11-20 19:04:29.287102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.287136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.287439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.287476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.287691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.287737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.287984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.288018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.288270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.288306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 
00:27:07.047 [2024-11-20 19:04:29.288507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.288541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.288814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.288849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.289118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.289151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.289344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.289379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.289656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.289690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 
00:27:07.047 [2024-11-20 19:04:29.289805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.289839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.290032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.290065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.290262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.290298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.290500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.290534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.290782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.290816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 
00:27:07.047 [2024-11-20 19:04:29.290998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.291033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.291230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.291264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.291444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.291477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.291745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.291777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.292003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.292037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 
00:27:07.047 [2024-11-20 19:04:29.292242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.292278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.292422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.292454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.292712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.292786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.293001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.293040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-11-20 19:04:29.293169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.293218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 
00:27:07.047 [2024-11-20 19:04:29.293399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-11-20 19:04:29.293433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.048 [2024-11-20 19:04:29.293643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.048 [2024-11-20 19:04:29.293677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.048 qpair failed and we were unable to recover it. 00:27:07.048 [2024-11-20 19:04:29.293808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.048 [2024-11-20 19:04:29.293841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.048 qpair failed and we were unable to recover it. 00:27:07.048 [2024-11-20 19:04:29.294057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.048 [2024-11-20 19:04:29.294091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.048 qpair failed and we were unable to recover it. 00:27:07.048 [2024-11-20 19:04:29.294300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.048 [2024-11-20 19:04:29.294335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.048 qpair failed and we were unable to recover it. 
00:27:07.048 [2024-11-20 19:04:29.294526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.048 [2024-11-20 19:04:29.294560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.048 qpair failed and we were unable to recover it. 00:27:07.048 [2024-11-20 19:04:29.294808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.048 [2024-11-20 19:04:29.294842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.048 qpair failed and we were unable to recover it. 00:27:07.048 [2024-11-20 19:04:29.294978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.048 [2024-11-20 19:04:29.295012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.048 qpair failed and we were unable to recover it. 00:27:07.048 [2024-11-20 19:04:29.295198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.048 [2024-11-20 19:04:29.295239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.048 qpair failed and we were unable to recover it. 00:27:07.048 [2024-11-20 19:04:29.295430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.048 [2024-11-20 19:04:29.295464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.048 qpair failed and we were unable to recover it. 
00:27:07.048 [2024-11-20 19:04:29.295711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.048 [2024-11-20 19:04:29.295745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.048 qpair failed and we were unable to recover it. 00:27:07.048 [2024-11-20 19:04:29.295969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.048 [2024-11-20 19:04:29.296002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.048 qpair failed and we were unable to recover it. 00:27:07.048 [2024-11-20 19:04:29.296192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.048 [2024-11-20 19:04:29.296240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.048 qpair failed and we were unable to recover it. 00:27:07.048 [2024-11-20 19:04:29.296387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.048 [2024-11-20 19:04:29.296422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.048 qpair failed and we were unable to recover it. 00:27:07.048 [2024-11-20 19:04:29.296626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.048 [2024-11-20 19:04:29.296659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.048 qpair failed and we were unable to recover it. 
00:27:07.048 [2024-11-20 19:04:29.296939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.048 [2024-11-20 19:04:29.296974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.048 qpair failed and we were unable to recover it. 00:27:07.048 [2024-11-20 19:04:29.297249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.048 [2024-11-20 19:04:29.297285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.048 qpair failed and we were unable to recover it. 00:27:07.048 [2024-11-20 19:04:29.297475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.048 [2024-11-20 19:04:29.297509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.048 qpair failed and we were unable to recover it. 00:27:07.048 [2024-11-20 19:04:29.297724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.048 [2024-11-20 19:04:29.297758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.048 qpair failed and we were unable to recover it. 00:27:07.048 [2024-11-20 19:04:29.298057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.048 [2024-11-20 19:04:29.298092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.048 qpair failed and we were unable to recover it. 
00:27:07.048 [2024-11-20 19:04:29.298247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.048 [2024-11-20 19:04:29.298283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.048 qpair failed and we were unable to recover it. 00:27:07.048 [2024-11-20 19:04:29.298438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.048 [2024-11-20 19:04:29.298471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.048 qpair failed and we were unable to recover it. 00:27:07.048 [2024-11-20 19:04:29.298586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.048 [2024-11-20 19:04:29.298621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.048 qpair failed and we were unable to recover it. 00:27:07.048 [2024-11-20 19:04:29.298762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.048 [2024-11-20 19:04:29.298795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.048 qpair failed and we were unable to recover it. 00:27:07.048 [2024-11-20 19:04:29.298997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.048 [2024-11-20 19:04:29.299030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.048 qpair failed and we were unable to recover it. 
00:27:07.048 [2024-11-20 19:04:29.299230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.048 [2024-11-20 19:04:29.299266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.048 qpair failed and we were unable to recover it.
00:27:07.050 [... identical repeats elided: the same connect() failure (errno = 111) and qpair recovery failure for tqpair=0x7f741c000b90 against addr=10.0.0.2, port=4420 recurred continuously from 19:04:29.299461 through 19:04:29.325127 ...]
00:27:07.050 [2024-11-20 19:04:29.325256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.325291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-11-20 19:04:29.325522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.325556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-11-20 19:04:29.325753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.325786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-11-20 19:04:29.326032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.326066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-11-20 19:04:29.326288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.326324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 
00:27:07.050 [2024-11-20 19:04:29.326540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.326574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-11-20 19:04:29.326770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.326804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-11-20 19:04:29.326945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.326979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-11-20 19:04:29.327267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.327302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-11-20 19:04:29.327484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.327518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 
00:27:07.050 [2024-11-20 19:04:29.327758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.327791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-11-20 19:04:29.328033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.328066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-11-20 19:04:29.328247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.328282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-11-20 19:04:29.328499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.328533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-11-20 19:04:29.328741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.328776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 
00:27:07.050 [2024-11-20 19:04:29.328968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.329001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-11-20 19:04:29.329270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.329305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-11-20 19:04:29.329437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.329477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-11-20 19:04:29.329689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.329723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-11-20 19:04:29.329874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.329908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 
00:27:07.050 [2024-11-20 19:04:29.330153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.330188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-11-20 19:04:29.330375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.330409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-11-20 19:04:29.330591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.330624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-11-20 19:04:29.330729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.330763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-11-20 19:04:29.330964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.330999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 
00:27:07.050 [2024-11-20 19:04:29.331135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.331169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-11-20 19:04:29.331361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.331397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-11-20 19:04:29.331538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.331572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-11-20 19:04:29.331777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.331811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-11-20 19:04:29.332000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.332035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 
00:27:07.050 [2024-11-20 19:04:29.332251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.332288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-11-20 19:04:29.332475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.332508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-11-20 19:04:29.332647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.332681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-11-20 19:04:29.332862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.332896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-11-20 19:04:29.333092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.333125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 
00:27:07.050 [2024-11-20 19:04:29.333241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.333277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-11-20 19:04:29.333396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.333429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-11-20 19:04:29.333608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.333641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-11-20 19:04:29.333836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.333869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-11-20 19:04:29.334064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.334099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 
00:27:07.050 [2024-11-20 19:04:29.334240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.334276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-11-20 19:04:29.334458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.334492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-11-20 19:04:29.334736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.334769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-11-20 19:04:29.335037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.335071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-11-20 19:04:29.335277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-11-20 19:04:29.335311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 
00:27:07.050 [2024-11-20 19:04:29.335581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.050 [2024-11-20 19:04:29.335614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.050 qpair failed and we were unable to recover it.
00:27:07.050 [2024-11-20 19:04:29.335801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.050 [2024-11-20 19:04:29.335835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.050 qpair failed and we were unable to recover it.
00:27:07.050 [2024-11-20 19:04:29.336076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.050 [2024-11-20 19:04:29.336109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.051 qpair failed and we were unable to recover it.
00:27:07.051 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3803758 Killed "${NVMF_APP[@]}" "$@"
00:27:07.051 [2024-11-20 19:04:29.336401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.051 [2024-11-20 19:04:29.336437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.051 qpair failed and we were unable to recover it.
00:27:07.051 [2024-11-20 19:04:29.336562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.051 [2024-11-20 19:04:29.336596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.051 qpair failed and we were unable to recover it.
00:27:07.051 [2024-11-20 19:04:29.336747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.051 [2024-11-20 19:04:29.336782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.051 qpair failed and we were unable to recover it.
00:27:07.051 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:27:07.051 [2024-11-20 19:04:29.336974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.051 [2024-11-20 19:04:29.337008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.051 qpair failed and we were unable to recover it.
00:27:07.051 [2024-11-20 19:04:29.337192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.051 [2024-11-20 19:04:29.337266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.051 qpair failed and we were unable to recover it.
00:27:07.051 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:27:07.051 [2024-11-20 19:04:29.337398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.051 [2024-11-20 19:04:29.337433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.051 qpair failed and we were unable to recover it.
00:27:07.051 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:07.051 [2024-11-20 19:04:29.337621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.051 [2024-11-20 19:04:29.337655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.051 qpair failed and we were unable to recover it.
00:27:07.051 [2024-11-20 19:04:29.337840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.051 [2024-11-20 19:04:29.337881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.051 qpair failed and we were unable to recover it.
00:27:07.051 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:07.051 [2024-11-20 19:04:29.338002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.051 [2024-11-20 19:04:29.338036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.051 qpair failed and we were unable to recover it.
00:27:07.051 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:07.051 [2024-11-20 19:04:29.338247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.051 [2024-11-20 19:04:29.338283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.051 qpair failed and we were unable to recover it.
00:27:07.051 [2024-11-20 19:04:29.338462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-11-20 19:04:29.338497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-11-20 19:04:29.338608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-11-20 19:04:29.338640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-11-20 19:04:29.338780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-11-20 19:04:29.338815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-11-20 19:04:29.339095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-11-20 19:04:29.339131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-11-20 19:04:29.339262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-11-20 19:04:29.339298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 
00:27:07.051 [2024-11-20 19:04:29.339407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-11-20 19:04:29.339440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-11-20 19:04:29.339594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-11-20 19:04:29.339627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-11-20 19:04:29.339811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-11-20 19:04:29.339845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-11-20 19:04:29.340093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-11-20 19:04:29.340127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-11-20 19:04:29.340396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-11-20 19:04:29.340431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 
00:27:07.051 [2024-11-20 19:04:29.340567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-11-20 19:04:29.340601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-11-20 19:04:29.340893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-11-20 19:04:29.340927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-11-20 19:04:29.341102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-11-20 19:04:29.341136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-11-20 19:04:29.341277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-11-20 19:04:29.341312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-11-20 19:04:29.341423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-11-20 19:04:29.341457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 
00:27:07.051 [2024-11-20 19:04:29.341721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-11-20 19:04:29.341755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it.
00:27:07.051 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3804623
00:27:07.329 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3804623
00:27:07.329 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:07.329 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3804623 ']'
00:27:07.329 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:07.329 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:07.329 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:07.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:07.329 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:07.329 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:07.331 [2024-11-20 19:04:29.366427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.331 [2024-11-20 19:04:29.366460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.331 qpair failed and we were unable to recover it. 00:27:07.331 [2024-11-20 19:04:29.366582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-11-20 19:04:29.366616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-11-20 19:04:29.366745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-11-20 19:04:29.366778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-11-20 19:04:29.366909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-11-20 19:04:29.366943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-11-20 19:04:29.367079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-11-20 19:04:29.367114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 
00:27:07.332 [2024-11-20 19:04:29.367296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-11-20 19:04:29.367331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-11-20 19:04:29.367455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-11-20 19:04:29.367489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-11-20 19:04:29.367638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-11-20 19:04:29.367671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-11-20 19:04:29.367858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-11-20 19:04:29.367892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-11-20 19:04:29.368087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-11-20 19:04:29.368119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 
00:27:07.332 [2024-11-20 19:04:29.368239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-11-20 19:04:29.368274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-11-20 19:04:29.368477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-11-20 19:04:29.368510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-11-20 19:04:29.368659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-11-20 19:04:29.368692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-11-20 19:04:29.368937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-11-20 19:04:29.368970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-11-20 19:04:29.369147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-11-20 19:04:29.369180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 
00:27:07.332 [2024-11-20 19:04:29.369336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-11-20 19:04:29.369370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-11-20 19:04:29.369476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-11-20 19:04:29.369509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-11-20 19:04:29.369623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-11-20 19:04:29.369655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-11-20 19:04:29.369861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-11-20 19:04:29.369894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-11-20 19:04:29.370134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-11-20 19:04:29.370167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 
00:27:07.332 [2024-11-20 19:04:29.370293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-11-20 19:04:29.370327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-11-20 19:04:29.370439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-11-20 19:04:29.370472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-11-20 19:04:29.370651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-11-20 19:04:29.370684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-11-20 19:04:29.370793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-11-20 19:04:29.370826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-11-20 19:04:29.370998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-11-20 19:04:29.371031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 
00:27:07.332 [2024-11-20 19:04:29.371226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-11-20 19:04:29.371262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-11-20 19:04:29.371500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-11-20 19:04:29.371534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-11-20 19:04:29.371641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-11-20 19:04:29.371675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-11-20 19:04:29.371827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-11-20 19:04:29.371859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-11-20 19:04:29.371973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-11-20 19:04:29.372006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 
00:27:07.332 [2024-11-20 19:04:29.372132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-11-20 19:04:29.372167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.332 qpair failed and we were unable to recover it. 00:27:07.332 [2024-11-20 19:04:29.372360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.332 [2024-11-20 19:04:29.372394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-11-20 19:04:29.372654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-11-20 19:04:29.372688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-11-20 19:04:29.372882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-11-20 19:04:29.372916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-11-20 19:04:29.373026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-11-20 19:04:29.373059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 
00:27:07.333 [2024-11-20 19:04:29.373246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-11-20 19:04:29.373280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-11-20 19:04:29.373575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-11-20 19:04:29.373609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-11-20 19:04:29.373829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-11-20 19:04:29.373876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-11-20 19:04:29.374004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-11-20 19:04:29.374047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-11-20 19:04:29.374229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-11-20 19:04:29.374264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 
00:27:07.333 [2024-11-20 19:04:29.374481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-11-20 19:04:29.374515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-11-20 19:04:29.374766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-11-20 19:04:29.374800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-11-20 19:04:29.374990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-11-20 19:04:29.375023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-11-20 19:04:29.375155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-11-20 19:04:29.375188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-11-20 19:04:29.375370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-11-20 19:04:29.375403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 
00:27:07.333 [2024-11-20 19:04:29.375541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-11-20 19:04:29.375574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-11-20 19:04:29.375698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-11-20 19:04:29.375732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-11-20 19:04:29.375854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-11-20 19:04:29.375887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-11-20 19:04:29.376013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-11-20 19:04:29.376046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-11-20 19:04:29.376296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-11-20 19:04:29.376331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 
00:27:07.333 [2024-11-20 19:04:29.376444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-11-20 19:04:29.376486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-11-20 19:04:29.376611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-11-20 19:04:29.376645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-11-20 19:04:29.376834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-11-20 19:04:29.376867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-11-20 19:04:29.376991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-11-20 19:04:29.377025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-11-20 19:04:29.377294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-11-20 19:04:29.377330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 
00:27:07.333 [2024-11-20 19:04:29.377469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-11-20 19:04:29.377502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-11-20 19:04:29.377742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-11-20 19:04:29.377776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-11-20 19:04:29.377959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-11-20 19:04:29.377992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-11-20 19:04:29.378111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-11-20 19:04:29.378143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-11-20 19:04:29.378288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-11-20 19:04:29.378322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 
00:27:07.333 [2024-11-20 19:04:29.378498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-11-20 19:04:29.378532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-11-20 19:04:29.378671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-11-20 19:04:29.378704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-11-20 19:04:29.378880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-11-20 19:04:29.378913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-11-20 19:04:29.379023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-11-20 19:04:29.379057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-11-20 19:04:29.379181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-11-20 19:04:29.379224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 
00:27:07.333 [2024-11-20 19:04:29.379533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-11-20 19:04:29.379609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-11-20 19:04:29.379910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-11-20 19:04:29.379990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-11-20 19:04:29.380150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.333 [2024-11-20 19:04:29.380189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.333 qpair failed and we were unable to recover it. 00:27:07.333 [2024-11-20 19:04:29.380459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.334 [2024-11-20 19:04:29.380496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.334 qpair failed and we were unable to recover it. 00:27:07.334 [2024-11-20 19:04:29.380634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.334 [2024-11-20 19:04:29.380668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.334 qpair failed and we were unable to recover it. 
00:27:07.334 [2024-11-20 19:04:29.380790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.334 [2024-11-20 19:04:29.380824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.334 qpair failed and we were unable to recover it. 00:27:07.334 [2024-11-20 19:04:29.381068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.334 [2024-11-20 19:04:29.381102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.334 qpair failed and we were unable to recover it. 00:27:07.334 [2024-11-20 19:04:29.381297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.334 [2024-11-20 19:04:29.381333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.334 qpair failed and we were unable to recover it. 00:27:07.334 [2024-11-20 19:04:29.381466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.334 [2024-11-20 19:04:29.381500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.334 qpair failed and we were unable to recover it. 00:27:07.334 [2024-11-20 19:04:29.381693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.334 [2024-11-20 19:04:29.381727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.334 qpair failed and we were unable to recover it. 
00:27:07.334 [2024-11-20 19:04:29.381931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.334 [2024-11-20 19:04:29.381964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.334 qpair failed and we were unable to recover it. 00:27:07.334 [2024-11-20 19:04:29.382096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.334 [2024-11-20 19:04:29.382130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.334 qpair failed and we were unable to recover it. 00:27:07.334 [2024-11-20 19:04:29.382305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.334 [2024-11-20 19:04:29.382340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.334 qpair failed and we were unable to recover it. 00:27:07.334 [2024-11-20 19:04:29.382452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.334 [2024-11-20 19:04:29.382496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.334 qpair failed and we were unable to recover it. 00:27:07.334 [2024-11-20 19:04:29.382618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.334 [2024-11-20 19:04:29.382651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.334 qpair failed and we were unable to recover it. 
00:27:07.334 [2024-11-20 19:04:29.382901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.334 [2024-11-20 19:04:29.382934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.334 qpair failed and we were unable to recover it. 00:27:07.334 [2024-11-20 19:04:29.383177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.334 [2024-11-20 19:04:29.383234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.334 qpair failed and we were unable to recover it. 00:27:07.334 [2024-11-20 19:04:29.383476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.334 [2024-11-20 19:04:29.383510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.334 qpair failed and we were unable to recover it. 00:27:07.334 [2024-11-20 19:04:29.383627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.334 [2024-11-20 19:04:29.383661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.334 qpair failed and we were unable to recover it. 00:27:07.334 [2024-11-20 19:04:29.383872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.334 [2024-11-20 19:04:29.383904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.334 qpair failed and we were unable to recover it. 
00:27:07.334 [2024-11-20 19:04:29.384159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.334 [2024-11-20 19:04:29.384193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.334 qpair failed and we were unable to recover it. 00:27:07.334 [2024-11-20 19:04:29.384445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.334 [2024-11-20 19:04:29.384479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.334 qpair failed and we were unable to recover it. 00:27:07.334 [2024-11-20 19:04:29.384699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.334 [2024-11-20 19:04:29.384732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.334 qpair failed and we were unable to recover it. 00:27:07.334 [2024-11-20 19:04:29.384923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.334 [2024-11-20 19:04:29.384957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.334 qpair failed and we were unable to recover it. 00:27:07.334 [2024-11-20 19:04:29.385086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.334 [2024-11-20 19:04:29.385120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.334 qpair failed and we were unable to recover it. 
00:27:07.334 [2024-11-20 19:04:29.385301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.334 [2024-11-20 19:04:29.385335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.334 qpair failed and we were unable to recover it. 00:27:07.334 [2024-11-20 19:04:29.385442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.334 [2024-11-20 19:04:29.385475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.334 qpair failed and we were unable to recover it. 00:27:07.334 [2024-11-20 19:04:29.385657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.334 [2024-11-20 19:04:29.385691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.334 qpair failed and we were unable to recover it. 00:27:07.334 [2024-11-20 19:04:29.385912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.334 [2024-11-20 19:04:29.385946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.334 qpair failed and we were unable to recover it. 00:27:07.334 [2024-11-20 19:04:29.386051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.334 [2024-11-20 19:04:29.386084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.334 qpair failed and we were unable to recover it. 
00:27:07.334 [2024-11-20 19:04:29.386215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.334 [2024-11-20 19:04:29.386250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.334 qpair failed and we were unable to recover it. 00:27:07.334 [2024-11-20 19:04:29.386365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.334 [2024-11-20 19:04:29.386399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.334 qpair failed and we were unable to recover it. 00:27:07.334 [2024-11-20 19:04:29.386591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.334 [2024-11-20 19:04:29.386624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.334 qpair failed and we were unable to recover it. 00:27:07.334 [2024-11-20 19:04:29.386831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.334 [2024-11-20 19:04:29.386866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.334 qpair failed and we were unable to recover it. 00:27:07.334 [2024-11-20 19:04:29.386989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.334 [2024-11-20 19:04:29.387023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.334 qpair failed and we were unable to recover it. 
00:27:07.335 [2024-11-20 19:04:29.387219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.335 [2024-11-20 19:04:29.387254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.335 qpair failed and we were unable to recover it. 00:27:07.335 [2024-11-20 19:04:29.387390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.335 [2024-11-20 19:04:29.387424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.335 qpair failed and we were unable to recover it. 00:27:07.335 [2024-11-20 19:04:29.387551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.335 [2024-11-20 19:04:29.387584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.335 qpair failed and we were unable to recover it. 00:27:07.335 [2024-11-20 19:04:29.387691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.335 [2024-11-20 19:04:29.387724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.335 qpair failed and we were unable to recover it. 00:27:07.335 [2024-11-20 19:04:29.387905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.335 [2024-11-20 19:04:29.387938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.335 qpair failed and we were unable to recover it. 
00:27:07.335 [2024-11-20 19:04:29.388143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.335 [2024-11-20 19:04:29.388189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.335 qpair failed and we were unable to recover it. 00:27:07.335 [2024-11-20 19:04:29.388400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.335 [2024-11-20 19:04:29.388434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.335 qpair failed and we were unable to recover it. 00:27:07.335 [2024-11-20 19:04:29.388647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.335 [2024-11-20 19:04:29.388681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.335 qpair failed and we were unable to recover it. 00:27:07.335 [2024-11-20 19:04:29.388817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.335 [2024-11-20 19:04:29.388849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.335 qpair failed and we were unable to recover it. 00:27:07.335 [2024-11-20 19:04:29.389039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.335 [2024-11-20 19:04:29.389072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.335 qpair failed and we were unable to recover it. 
00:27:07.335 [2024-11-20 19:04:29.389278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.335 [2024-11-20 19:04:29.389314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.335 qpair failed and we were unable to recover it. 00:27:07.335 [2024-11-20 19:04:29.389490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.335 [2024-11-20 19:04:29.389522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.335 qpair failed and we were unable to recover it. 00:27:07.335 [2024-11-20 19:04:29.389718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.335 [2024-11-20 19:04:29.389751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.335 qpair failed and we were unable to recover it. 00:27:07.335 [2024-11-20 19:04:29.389924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.335 [2024-11-20 19:04:29.389958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.335 qpair failed and we were unable to recover it. 00:27:07.335 [2024-11-20 19:04:29.390150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.335 [2024-11-20 19:04:29.390185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.335 qpair failed and we were unable to recover it. 
00:27:07.335 [2024-11-20 19:04:29.390456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.335 [2024-11-20 19:04:29.390490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.335 qpair failed and we were unable to recover it. 00:27:07.335 [2024-11-20 19:04:29.390728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.335 [2024-11-20 19:04:29.390762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.335 qpair failed and we were unable to recover it. 00:27:07.335 [2024-11-20 19:04:29.390964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.335 [2024-11-20 19:04:29.390998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.335 qpair failed and we were unable to recover it. 00:27:07.335 [2024-11-20 19:04:29.391138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.335 [2024-11-20 19:04:29.391182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.335 qpair failed and we were unable to recover it. 00:27:07.335 [2024-11-20 19:04:29.391409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.335 [2024-11-20 19:04:29.391443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.335 qpair failed and we were unable to recover it. 
00:27:07.335 [2024-11-20 19:04:29.391707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.335 [2024-11-20 19:04:29.391741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.335 qpair failed and we were unable to recover it. 00:27:07.335 [2024-11-20 19:04:29.391924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.335 [2024-11-20 19:04:29.391958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.335 qpair failed and we were unable to recover it. 00:27:07.335 [2024-11-20 19:04:29.392224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.335 [2024-11-20 19:04:29.392260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.335 qpair failed and we were unable to recover it. 00:27:07.335 [2024-11-20 19:04:29.392387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.335 [2024-11-20 19:04:29.392421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.335 qpair failed and we were unable to recover it. 00:27:07.335 [2024-11-20 19:04:29.392522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.335 [2024-11-20 19:04:29.392556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.335 qpair failed and we were unable to recover it. 
00:27:07.335 [2024-11-20 19:04:29.392759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.335 [2024-11-20 19:04:29.392793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.335 qpair failed and we were unable to recover it. 00:27:07.335 [2024-11-20 19:04:29.392917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.335 [2024-11-20 19:04:29.392950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.335 qpair failed and we were unable to recover it. 00:27:07.335 [2024-11-20 19:04:29.393073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.335 [2024-11-20 19:04:29.393108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.335 qpair failed and we were unable to recover it. 00:27:07.335 [2024-11-20 19:04:29.393292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.335 [2024-11-20 19:04:29.393327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.335 qpair failed and we were unable to recover it. 00:27:07.335 [2024-11-20 19:04:29.393516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.335 [2024-11-20 19:04:29.393550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.335 qpair failed and we were unable to recover it. 
00:27:07.335 [2024-11-20 19:04:29.393684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.335 [2024-11-20 19:04:29.393717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.335 qpair failed and we were unable to recover it. 00:27:07.335 [2024-11-20 19:04:29.393822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.335 [2024-11-20 19:04:29.393855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.335 qpair failed and we were unable to recover it. 00:27:07.335 [2024-11-20 19:04:29.394047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.335 [2024-11-20 19:04:29.394082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.335 qpair failed and we were unable to recover it. 00:27:07.335 [2024-11-20 19:04:29.394326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.335 [2024-11-20 19:04:29.394360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.335 qpair failed and we were unable to recover it. 00:27:07.335 [2024-11-20 19:04:29.394478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.336 [2024-11-20 19:04:29.394511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.336 qpair failed and we were unable to recover it. 
00:27:07.336 [2024-11-20 19:04:29.394657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.336 [2024-11-20 19:04:29.394691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.336 qpair failed and we were unable to recover it. 00:27:07.336 [2024-11-20 19:04:29.394827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.336 [2024-11-20 19:04:29.394860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.336 qpair failed and we were unable to recover it. 00:27:07.336 [2024-11-20 19:04:29.394964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.336 [2024-11-20 19:04:29.394997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.336 qpair failed and we were unable to recover it. 00:27:07.336 [2024-11-20 19:04:29.395128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.336 [2024-11-20 19:04:29.395127] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:27:07.336 [2024-11-20 19:04:29.395166] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:07.336 [2024-11-20 19:04:29.395161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.336 qpair failed and we were unable to recover it. 
00:27:07.336 [2024-11-20 19:04:29.395291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.336 [2024-11-20 19:04:29.395323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.336 qpair failed and we were unable to recover it. 00:27:07.336 [2024-11-20 19:04:29.395568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.336 [2024-11-20 19:04:29.395598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.336 qpair failed and we were unable to recover it. 00:27:07.336 [2024-11-20 19:04:29.395862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.336 [2024-11-20 19:04:29.395893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.336 qpair failed and we were unable to recover it. 00:27:07.336 [2024-11-20 19:04:29.396009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.336 [2024-11-20 19:04:29.396039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.336 qpair failed and we were unable to recover it. 00:27:07.336 [2024-11-20 19:04:29.396219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.336 [2024-11-20 19:04:29.396253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.336 qpair failed and we were unable to recover it. 
00:27:07.336 [2024-11-20 19:04:29.396508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.336 [2024-11-20 19:04:29.396541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.336 qpair failed and we were unable to recover it. 00:27:07.336 [2024-11-20 19:04:29.396735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.336 [2024-11-20 19:04:29.396768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.336 qpair failed and we were unable to recover it. 00:27:07.336 [2024-11-20 19:04:29.396884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.336 [2024-11-20 19:04:29.396917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.336 qpair failed and we were unable to recover it. 00:27:07.336 [2024-11-20 19:04:29.397102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.336 [2024-11-20 19:04:29.397136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.336 qpair failed and we were unable to recover it. 00:27:07.336 [2024-11-20 19:04:29.397345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.336 [2024-11-20 19:04:29.397378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.336 qpair failed and we were unable to recover it. 
00:27:07.336 [2024-11-20 19:04:29.397497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.336 [2024-11-20 19:04:29.397530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.336 qpair failed and we were unable to recover it. 00:27:07.336 [2024-11-20 19:04:29.397723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.336 [2024-11-20 19:04:29.397757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.336 qpair failed and we were unable to recover it. 00:27:07.336 [2024-11-20 19:04:29.397935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.336 [2024-11-20 19:04:29.397968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.336 qpair failed and we were unable to recover it. 00:27:07.336 [2024-11-20 19:04:29.398152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.336 [2024-11-20 19:04:29.398185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.336 qpair failed and we were unable to recover it. 00:27:07.336 [2024-11-20 19:04:29.398384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.336 [2024-11-20 19:04:29.398418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.336 qpair failed and we were unable to recover it. 
00:27:07.336 [2024-11-20 19:04:29.398638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.336 [2024-11-20 19:04:29.398672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.336 qpair failed and we were unable to recover it. 00:27:07.336 [2024-11-20 19:04:29.398849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.336 [2024-11-20 19:04:29.398881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.336 qpair failed and we were unable to recover it. 00:27:07.336 [2024-11-20 19:04:29.399014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.336 [2024-11-20 19:04:29.399048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.336 qpair failed and we were unable to recover it. 00:27:07.336 [2024-11-20 19:04:29.399220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.336 [2024-11-20 19:04:29.399259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.336 qpair failed and we were unable to recover it. 00:27:07.336 [2024-11-20 19:04:29.399370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.336 [2024-11-20 19:04:29.399403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.336 qpair failed and we were unable to recover it. 
00:27:07.336 [2024-11-20 19:04:29.399582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.336 [2024-11-20 19:04:29.399616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.336 qpair failed and we were unable to recover it. 00:27:07.336 [2024-11-20 19:04:29.399752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.336 [2024-11-20 19:04:29.399784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.336 qpair failed and we were unable to recover it. 00:27:07.336 [2024-11-20 19:04:29.400025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.336 [2024-11-20 19:04:29.400058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.336 qpair failed and we were unable to recover it. 00:27:07.336 [2024-11-20 19:04:29.400186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.336 [2024-11-20 19:04:29.400227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.336 qpair failed and we were unable to recover it. 00:27:07.336 [2024-11-20 19:04:29.400406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.336 [2024-11-20 19:04:29.400439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.336 qpair failed and we were unable to recover it. 
00:27:07.336 [2024-11-20 19:04:29.400652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.336 [2024-11-20 19:04:29.400685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.336 qpair failed and we were unable to recover it. 00:27:07.336 [2024-11-20 19:04:29.400961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.336 [2024-11-20 19:04:29.400994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.336 qpair failed and we were unable to recover it. 00:27:07.336 [2024-11-20 19:04:29.401100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.336 [2024-11-20 19:04:29.401133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.336 qpair failed and we were unable to recover it. 00:27:07.336 [2024-11-20 19:04:29.401343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.336 [2024-11-20 19:04:29.401377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.336 qpair failed and we were unable to recover it. 00:27:07.336 [2024-11-20 19:04:29.401504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.337 [2024-11-20 19:04:29.401537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.337 qpair failed and we were unable to recover it. 
00:27:07.337 [2024-11-20 19:04:29.401740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.337 [2024-11-20 19:04:29.401773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.337 qpair failed and we were unable to recover it.
00:27:07.337 [2024-11-20 19:04:29.401967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.337 [2024-11-20 19:04:29.402000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.337 qpair failed and we were unable to recover it.
00:27:07.337 [2024-11-20 19:04:29.402215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.337 [2024-11-20 19:04:29.402251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.337 qpair failed and we were unable to recover it.
00:27:07.337 [2024-11-20 19:04:29.402498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.337 [2024-11-20 19:04:29.402531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.337 qpair failed and we were unable to recover it.
00:27:07.337 [2024-11-20 19:04:29.402733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.337 [2024-11-20 19:04:29.402766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.337 qpair failed and we were unable to recover it.
00:27:07.337 [2024-11-20 19:04:29.403031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.337 [2024-11-20 19:04:29.403065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.337 qpair failed and we were unable to recover it.
00:27:07.337 [2024-11-20 19:04:29.403191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.337 [2024-11-20 19:04:29.403234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.337 qpair failed and we were unable to recover it.
00:27:07.337 [2024-11-20 19:04:29.403422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.337 [2024-11-20 19:04:29.403456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.337 qpair failed and we were unable to recover it.
00:27:07.337 [2024-11-20 19:04:29.403672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.337 [2024-11-20 19:04:29.403706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.337 qpair failed and we were unable to recover it.
00:27:07.337 [2024-11-20 19:04:29.403913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.337 [2024-11-20 19:04:29.403947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.337 qpair failed and we were unable to recover it.
00:27:07.337 [2024-11-20 19:04:29.404163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.337 [2024-11-20 19:04:29.404196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.337 qpair failed and we were unable to recover it.
00:27:07.337 [2024-11-20 19:04:29.404445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.337 [2024-11-20 19:04:29.404478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.337 qpair failed and we were unable to recover it.
00:27:07.337 [2024-11-20 19:04:29.404687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.337 [2024-11-20 19:04:29.404720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.337 qpair failed and we were unable to recover it.
00:27:07.337 [2024-11-20 19:04:29.404847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.337 [2024-11-20 19:04:29.404881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.337 qpair failed and we were unable to recover it.
00:27:07.337 [2024-11-20 19:04:29.405081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.337 [2024-11-20 19:04:29.405115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.337 qpair failed and we were unable to recover it.
00:27:07.337 [2024-11-20 19:04:29.405236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.337 [2024-11-20 19:04:29.405272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.337 qpair failed and we were unable to recover it.
00:27:07.337 [2024-11-20 19:04:29.405395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.337 [2024-11-20 19:04:29.405428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.337 qpair failed and we were unable to recover it.
00:27:07.337 [2024-11-20 19:04:29.405559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.337 [2024-11-20 19:04:29.405592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.337 qpair failed and we were unable to recover it.
00:27:07.337 [2024-11-20 19:04:29.405788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.337 [2024-11-20 19:04:29.405819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.337 qpair failed and we were unable to recover it.
00:27:07.337 [2024-11-20 19:04:29.405925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.337 [2024-11-20 19:04:29.405959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.337 qpair failed and we were unable to recover it.
00:27:07.337 [2024-11-20 19:04:29.406157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.337 [2024-11-20 19:04:29.406190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.337 qpair failed and we were unable to recover it.
00:27:07.337 [2024-11-20 19:04:29.406320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.337 [2024-11-20 19:04:29.406354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.337 qpair failed and we were unable to recover it.
00:27:07.337 [2024-11-20 19:04:29.406477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.337 [2024-11-20 19:04:29.406510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.337 qpair failed and we were unable to recover it.
00:27:07.337 [2024-11-20 19:04:29.406617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.337 [2024-11-20 19:04:29.406650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.337 qpair failed and we were unable to recover it.
00:27:07.337 [2024-11-20 19:04:29.406930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.337 [2024-11-20 19:04:29.406963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.337 qpair failed and we were unable to recover it.
00:27:07.337 [2024-11-20 19:04:29.407149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.337 [2024-11-20 19:04:29.407182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.337 qpair failed and we were unable to recover it.
00:27:07.337 [2024-11-20 19:04:29.407379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.337 [2024-11-20 19:04:29.407412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.337 qpair failed and we were unable to recover it.
00:27:07.337 [2024-11-20 19:04:29.407558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.337 [2024-11-20 19:04:29.407591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.337 qpair failed and we were unable to recover it.
00:27:07.337 [2024-11-20 19:04:29.407706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.337 [2024-11-20 19:04:29.407744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.337 qpair failed and we were unable to recover it.
00:27:07.337 [2024-11-20 19:04:29.407952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.337 [2024-11-20 19:04:29.407985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.337 qpair failed and we were unable to recover it.
00:27:07.337 [2024-11-20 19:04:29.408258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.337 [2024-11-20 19:04:29.408293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.337 qpair failed and we were unable to recover it.
00:27:07.337 [2024-11-20 19:04:29.408416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.337 [2024-11-20 19:04:29.408448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.337 qpair failed and we were unable to recover it.
00:27:07.337 [2024-11-20 19:04:29.408688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.337 [2024-11-20 19:04:29.408722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.337 qpair failed and we were unable to recover it.
00:27:07.337 [2024-11-20 19:04:29.408967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.337 [2024-11-20 19:04:29.409000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.337 qpair failed and we were unable to recover it.
00:27:07.337 [2024-11-20 19:04:29.409193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.337 [2024-11-20 19:04:29.409247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.337 qpair failed and we were unable to recover it.
00:27:07.337 [2024-11-20 19:04:29.409382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.337 [2024-11-20 19:04:29.409415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.337 qpair failed and we were unable to recover it.
00:27:07.338 [2024-11-20 19:04:29.409524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.338 [2024-11-20 19:04:29.409558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.338 qpair failed and we were unable to recover it.
00:27:07.338 [2024-11-20 19:04:29.409799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.338 [2024-11-20 19:04:29.409833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.338 qpair failed and we were unable to recover it.
00:27:07.338 [2024-11-20 19:04:29.410018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.338 [2024-11-20 19:04:29.410052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.338 qpair failed and we were unable to recover it.
00:27:07.338 [2024-11-20 19:04:29.410169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.338 [2024-11-20 19:04:29.410213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.338 qpair failed and we were unable to recover it.
00:27:07.338 [2024-11-20 19:04:29.410342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.338 [2024-11-20 19:04:29.410376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.338 qpair failed and we were unable to recover it.
00:27:07.338 [2024-11-20 19:04:29.410488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.338 [2024-11-20 19:04:29.410522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.338 qpair failed and we were unable to recover it.
00:27:07.338 [2024-11-20 19:04:29.410765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.338 [2024-11-20 19:04:29.410799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.338 qpair failed and we were unable to recover it.
00:27:07.338 [2024-11-20 19:04:29.411015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.338 [2024-11-20 19:04:29.411049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.338 qpair failed and we were unable to recover it.
00:27:07.338 [2024-11-20 19:04:29.411224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.338 [2024-11-20 19:04:29.411259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.338 qpair failed and we were unable to recover it.
00:27:07.338 [2024-11-20 19:04:29.411434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.338 [2024-11-20 19:04:29.411468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.338 qpair failed and we were unable to recover it.
00:27:07.338 [2024-11-20 19:04:29.411646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.338 [2024-11-20 19:04:29.411681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.338 qpair failed and we were unable to recover it.
00:27:07.338 [2024-11-20 19:04:29.411864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.338 [2024-11-20 19:04:29.411896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.338 qpair failed and we were unable to recover it.
00:27:07.338 [2024-11-20 19:04:29.412021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.338 [2024-11-20 19:04:29.412054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.338 qpair failed and we were unable to recover it.
00:27:07.338 [2024-11-20 19:04:29.412224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.338 [2024-11-20 19:04:29.412259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.338 qpair failed and we were unable to recover it.
00:27:07.338 [2024-11-20 19:04:29.412504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.338 [2024-11-20 19:04:29.412537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.338 qpair failed and we were unable to recover it.
00:27:07.338 [2024-11-20 19:04:29.412724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.338 [2024-11-20 19:04:29.412759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.338 qpair failed and we were unable to recover it.
00:27:07.338 [2024-11-20 19:04:29.413032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.338 [2024-11-20 19:04:29.413065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.338 qpair failed and we were unable to recover it.
00:27:07.338 [2024-11-20 19:04:29.413199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.338 [2024-11-20 19:04:29.413253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.338 qpair failed and we were unable to recover it.
00:27:07.338 [2024-11-20 19:04:29.413513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.338 [2024-11-20 19:04:29.413546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.338 qpair failed and we were unable to recover it.
00:27:07.338 [2024-11-20 19:04:29.413744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.338 [2024-11-20 19:04:29.413778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.338 qpair failed and we were unable to recover it.
00:27:07.338 [2024-11-20 19:04:29.413958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.338 [2024-11-20 19:04:29.413991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.338 qpair failed and we were unable to recover it.
00:27:07.338 [2024-11-20 19:04:29.414119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.338 [2024-11-20 19:04:29.414153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.338 qpair failed and we were unable to recover it.
00:27:07.338 [2024-11-20 19:04:29.414433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.338 [2024-11-20 19:04:29.414468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.338 qpair failed and we were unable to recover it.
00:27:07.338 [2024-11-20 19:04:29.414658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.338 [2024-11-20 19:04:29.414691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.338 qpair failed and we were unable to recover it.
00:27:07.338 [2024-11-20 19:04:29.414974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.338 [2024-11-20 19:04:29.415007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.338 qpair failed and we were unable to recover it.
00:27:07.338 [2024-11-20 19:04:29.415211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.338 [2024-11-20 19:04:29.415245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.338 qpair failed and we were unable to recover it.
00:27:07.338 [2024-11-20 19:04:29.415365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.338 [2024-11-20 19:04:29.415397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.338 qpair failed and we were unable to recover it.
00:27:07.338 [2024-11-20 19:04:29.415579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.338 [2024-11-20 19:04:29.415613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.338 qpair failed and we were unable to recover it.
00:27:07.338 [2024-11-20 19:04:29.415797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.338 [2024-11-20 19:04:29.415830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.338 qpair failed and we were unable to recover it.
00:27:07.338 [2024-11-20 19:04:29.415946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.338 [2024-11-20 19:04:29.415980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.338 qpair failed and we were unable to recover it.
00:27:07.338 [2024-11-20 19:04:29.416100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.338 [2024-11-20 19:04:29.416133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.338 qpair failed and we were unable to recover it.
00:27:07.338 [2024-11-20 19:04:29.416307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.338 [2024-11-20 19:04:29.416342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.338 qpair failed and we were unable to recover it.
00:27:07.338 [2024-11-20 19:04:29.416462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.339 [2024-11-20 19:04:29.416501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.339 qpair failed and we were unable to recover it.
00:27:07.339 [2024-11-20 19:04:29.416691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.339 [2024-11-20 19:04:29.416724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.339 qpair failed and we were unable to recover it.
00:27:07.339 [2024-11-20 19:04:29.416895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.339 [2024-11-20 19:04:29.416926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.339 qpair failed and we were unable to recover it.
00:27:07.339 [2024-11-20 19:04:29.417098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.339 [2024-11-20 19:04:29.417131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.339 qpair failed and we were unable to recover it.
00:27:07.339 [2024-11-20 19:04:29.417312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.339 [2024-11-20 19:04:29.417347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.339 qpair failed and we were unable to recover it.
00:27:07.339 [2024-11-20 19:04:29.417534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.339 [2024-11-20 19:04:29.417566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.339 qpair failed and we were unable to recover it.
00:27:07.339 [2024-11-20 19:04:29.417845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.339 [2024-11-20 19:04:29.417878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.339 qpair failed and we were unable to recover it.
00:27:07.339 [2024-11-20 19:04:29.418086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.339 [2024-11-20 19:04:29.418120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.339 qpair failed and we were unable to recover it.
00:27:07.339 [2024-11-20 19:04:29.418317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.339 [2024-11-20 19:04:29.418352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.339 qpair failed and we were unable to recover it.
00:27:07.339 [2024-11-20 19:04:29.418572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.339 [2024-11-20 19:04:29.418606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.339 qpair failed and we were unable to recover it.
00:27:07.339 [2024-11-20 19:04:29.418818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.339 [2024-11-20 19:04:29.418851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.339 qpair failed and we were unable to recover it.
00:27:07.339 [2024-11-20 19:04:29.419019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.339 [2024-11-20 19:04:29.419052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.339 qpair failed and we were unable to recover it.
00:27:07.339 [2024-11-20 19:04:29.419253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.339 [2024-11-20 19:04:29.419287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.339 qpair failed and we were unable to recover it.
00:27:07.339 [2024-11-20 19:04:29.419423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.339 [2024-11-20 19:04:29.419455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.339 qpair failed and we were unable to recover it.
00:27:07.339 [2024-11-20 19:04:29.419575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.339 [2024-11-20 19:04:29.419610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.339 qpair failed and we were unable to recover it.
00:27:07.339 [2024-11-20 19:04:29.419735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.339 [2024-11-20 19:04:29.419768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.339 qpair failed and we were unable to recover it.
00:27:07.339 [2024-11-20 19:04:29.420006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.339 [2024-11-20 19:04:29.420041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.339 qpair failed and we were unable to recover it.
00:27:07.339 [2024-11-20 19:04:29.420221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.339 [2024-11-20 19:04:29.420255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.339 qpair failed and we were unable to recover it.
00:27:07.339 [2024-11-20 19:04:29.420391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.339 [2024-11-20 19:04:29.420425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.339 qpair failed and we were unable to recover it.
00:27:07.339 [2024-11-20 19:04:29.420555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.339 [2024-11-20 19:04:29.420588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.339 qpair failed and we were unable to recover it.
00:27:07.339 [2024-11-20 19:04:29.420786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.339 [2024-11-20 19:04:29.420818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.339 qpair failed and we were unable to recover it.
00:27:07.339 [2024-11-20 19:04:29.421023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.339 [2024-11-20 19:04:29.421057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.339 qpair failed and we were unable to recover it.
00:27:07.339 [2024-11-20 19:04:29.421248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.339 [2024-11-20 19:04:29.421283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.339 qpair failed and we were unable to recover it.
00:27:07.339 [2024-11-20 19:04:29.421544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.339 [2024-11-20 19:04:29.421577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.339 qpair failed and we were unable to recover it.
00:27:07.339 [2024-11-20 19:04:29.421817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.339 [2024-11-20 19:04:29.421854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.339 qpair failed and we were unable to recover it.
00:27:07.339 [2024-11-20 19:04:29.422113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.339 [2024-11-20 19:04:29.422147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.339 qpair failed and we were unable to recover it.
00:27:07.339 [2024-11-20 19:04:29.422342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.339 [2024-11-20 19:04:29.422377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.339 qpair failed and we were unable to recover it.
00:27:07.339 [2024-11-20 19:04:29.422499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.339 [2024-11-20 19:04:29.422533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.339 qpair failed and we were unable to recover it.
00:27:07.339 [2024-11-20 19:04:29.422741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.339 [2024-11-20 19:04:29.422777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.339 qpair failed and we were unable to recover it.
00:27:07.339 [2024-11-20 19:04:29.422952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.339 [2024-11-20 19:04:29.422986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.339 qpair failed and we were unable to recover it.
00:27:07.339 [2024-11-20 19:04:29.423187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.339 [2024-11-20 19:04:29.423240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.339 qpair failed and we were unable to recover it.
00:27:07.339 [2024-11-20 19:04:29.423436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.339 [2024-11-20 19:04:29.423469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.339 qpair failed and we were unable to recover it.
00:27:07.339 [2024-11-20 19:04:29.423588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.339 [2024-11-20 19:04:29.423622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.339 qpair failed and we were unable to recover it.
00:27:07.339 [2024-11-20 19:04:29.423796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.339 [2024-11-20 19:04:29.423829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.339 qpair failed and we were unable to recover it.
00:27:07.339 [2024-11-20 19:04:29.424014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.339 [2024-11-20 19:04:29.424048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.339 qpair failed and we were unable to recover it.
00:27:07.339 [2024-11-20 19:04:29.424245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.339 [2024-11-20 19:04:29.424279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.339 qpair failed and we were unable to recover it.
00:27:07.340 [2024-11-20 19:04:29.424465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.340 [2024-11-20 19:04:29.424499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.340 qpair failed and we were unable to recover it.
00:27:07.340 [2024-11-20 19:04:29.424740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.340 [2024-11-20 19:04:29.424774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.340 qpair failed and we were unable to recover it.
00:27:07.340 [2024-11-20 19:04:29.424896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.340 [2024-11-20 19:04:29.424930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.340 qpair failed and we were unable to recover it.
00:27:07.340 [2024-11-20 19:04:29.425112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.340 [2024-11-20 19:04:29.425145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.340 qpair failed and we were unable to recover it.
00:27:07.340 [2024-11-20 19:04:29.425346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.340 [2024-11-20 19:04:29.425387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.340 qpair failed and we were unable to recover it.
00:27:07.340 [2024-11-20 19:04:29.425630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.340 [2024-11-20 19:04:29.425663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.340 qpair failed and we were unable to recover it.
00:27:07.340 [2024-11-20 19:04:29.425919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.340 [2024-11-20 19:04:29.425953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.340 qpair failed and we were unable to recover it.
00:27:07.340 [2024-11-20 19:04:29.426225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.340 [2024-11-20 19:04:29.426261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.340 qpair failed and we were unable to recover it.
00:27:07.340 [2024-11-20 19:04:29.426445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.340 [2024-11-20 19:04:29.426479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.340 qpair failed and we were unable to recover it.
00:27:07.340 [2024-11-20 19:04:29.426656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.340 [2024-11-20 19:04:29.426690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.340 qpair failed and we were unable to recover it.
00:27:07.340 [2024-11-20 19:04:29.426882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.340 [2024-11-20 19:04:29.426916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.340 qpair failed and we were unable to recover it.
00:27:07.340 [2024-11-20 19:04:29.427182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.340 [2024-11-20 19:04:29.427224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.340 qpair failed and we were unable to recover it. 00:27:07.340 [2024-11-20 19:04:29.427367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.340 [2024-11-20 19:04:29.427400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.340 qpair failed and we were unable to recover it. 00:27:07.340 [2024-11-20 19:04:29.427513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.340 [2024-11-20 19:04:29.427547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.340 qpair failed and we were unable to recover it. 00:27:07.340 [2024-11-20 19:04:29.427672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.340 [2024-11-20 19:04:29.427706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.340 qpair failed and we were unable to recover it. 00:27:07.340 [2024-11-20 19:04:29.428028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.340 [2024-11-20 19:04:29.428061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.340 qpair failed and we were unable to recover it. 
00:27:07.340 [2024-11-20 19:04:29.428246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.340 [2024-11-20 19:04:29.428281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.340 qpair failed and we were unable to recover it. 00:27:07.340 [2024-11-20 19:04:29.428469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.340 [2024-11-20 19:04:29.428503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.340 qpair failed and we were unable to recover it. 00:27:07.340 [2024-11-20 19:04:29.428697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.340 [2024-11-20 19:04:29.428731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.340 qpair failed and we were unable to recover it. 00:27:07.340 [2024-11-20 19:04:29.428838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.340 [2024-11-20 19:04:29.428871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.340 qpair failed and we were unable to recover it. 00:27:07.340 [2024-11-20 19:04:29.429059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.340 [2024-11-20 19:04:29.429093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.340 qpair failed and we were unable to recover it. 
00:27:07.340 [2024-11-20 19:04:29.429269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.340 [2024-11-20 19:04:29.429304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.340 qpair failed and we were unable to recover it. 00:27:07.340 [2024-11-20 19:04:29.429503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.340 [2024-11-20 19:04:29.429536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.340 qpair failed and we were unable to recover it. 00:27:07.340 [2024-11-20 19:04:29.429648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.340 [2024-11-20 19:04:29.429681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.340 qpair failed and we were unable to recover it. 00:27:07.340 [2024-11-20 19:04:29.429941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.340 [2024-11-20 19:04:29.429975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.340 qpair failed and we were unable to recover it. 00:27:07.340 [2024-11-20 19:04:29.430090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.340 [2024-11-20 19:04:29.430124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.340 qpair failed and we were unable to recover it. 
00:27:07.340 [2024-11-20 19:04:29.430330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.340 [2024-11-20 19:04:29.430366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.340 qpair failed and we were unable to recover it. 00:27:07.340 [2024-11-20 19:04:29.430478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.340 [2024-11-20 19:04:29.430511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.340 qpair failed and we were unable to recover it. 00:27:07.340 [2024-11-20 19:04:29.430750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.340 [2024-11-20 19:04:29.430784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.340 qpair failed and we were unable to recover it. 00:27:07.340 [2024-11-20 19:04:29.430890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.340 [2024-11-20 19:04:29.430923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.340 qpair failed and we were unable to recover it. 00:27:07.340 [2024-11-20 19:04:29.431180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.340 [2024-11-20 19:04:29.431223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.340 qpair failed and we were unable to recover it. 
00:27:07.340 [2024-11-20 19:04:29.431342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.340 [2024-11-20 19:04:29.431377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.340 qpair failed and we were unable to recover it. 00:27:07.340 [2024-11-20 19:04:29.431591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.340 [2024-11-20 19:04:29.431624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.340 qpair failed and we were unable to recover it. 00:27:07.340 [2024-11-20 19:04:29.431831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.340 [2024-11-20 19:04:29.431866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.340 qpair failed and we were unable to recover it. 00:27:07.341 [2024-11-20 19:04:29.432109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.341 [2024-11-20 19:04:29.432143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.341 qpair failed and we were unable to recover it. 00:27:07.341 [2024-11-20 19:04:29.432285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.341 [2024-11-20 19:04:29.432320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.341 qpair failed and we were unable to recover it. 
00:27:07.341 [2024-11-20 19:04:29.432437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.341 [2024-11-20 19:04:29.432469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.341 qpair failed and we were unable to recover it. 00:27:07.341 [2024-11-20 19:04:29.432595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.341 [2024-11-20 19:04:29.432629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.341 qpair failed and we were unable to recover it. 00:27:07.341 [2024-11-20 19:04:29.432811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.341 [2024-11-20 19:04:29.432845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.341 qpair failed and we were unable to recover it. 00:27:07.341 [2024-11-20 19:04:29.432959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.341 [2024-11-20 19:04:29.432993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.341 qpair failed and we were unable to recover it. 00:27:07.341 [2024-11-20 19:04:29.433128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.341 [2024-11-20 19:04:29.433162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.341 qpair failed and we were unable to recover it. 
00:27:07.341 [2024-11-20 19:04:29.433424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.341 [2024-11-20 19:04:29.433459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.341 qpair failed and we were unable to recover it. 00:27:07.341 [2024-11-20 19:04:29.433582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.341 [2024-11-20 19:04:29.433616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.341 qpair failed and we were unable to recover it. 00:27:07.341 [2024-11-20 19:04:29.433832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.341 [2024-11-20 19:04:29.433867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.341 qpair failed and we were unable to recover it. 00:27:07.341 [2024-11-20 19:04:29.433973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.341 [2024-11-20 19:04:29.434013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.341 qpair failed and we were unable to recover it. 00:27:07.341 [2024-11-20 19:04:29.434137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.341 [2024-11-20 19:04:29.434171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.341 qpair failed and we were unable to recover it. 
00:27:07.341 [2024-11-20 19:04:29.434357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.341 [2024-11-20 19:04:29.434429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.341 qpair failed and we were unable to recover it. 00:27:07.341 [2024-11-20 19:04:29.434570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.341 [2024-11-20 19:04:29.434608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.341 qpair failed and we were unable to recover it. 00:27:07.341 [2024-11-20 19:04:29.434741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.341 [2024-11-20 19:04:29.434774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.341 qpair failed and we were unable to recover it. 00:27:07.341 [2024-11-20 19:04:29.434888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.341 [2024-11-20 19:04:29.434923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.341 qpair failed and we were unable to recover it. 00:27:07.341 [2024-11-20 19:04:29.435027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.341 [2024-11-20 19:04:29.435061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.341 qpair failed and we were unable to recover it. 
00:27:07.341 [2024-11-20 19:04:29.435262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.341 [2024-11-20 19:04:29.435298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.341 qpair failed and we were unable to recover it. 00:27:07.341 [2024-11-20 19:04:29.435489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.341 [2024-11-20 19:04:29.435524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.341 qpair failed and we were unable to recover it. 00:27:07.341 [2024-11-20 19:04:29.435705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.341 [2024-11-20 19:04:29.435737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.341 qpair failed and we were unable to recover it. 00:27:07.341 [2024-11-20 19:04:29.435948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.341 [2024-11-20 19:04:29.435981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.341 qpair failed and we were unable to recover it. 00:27:07.341 [2024-11-20 19:04:29.436168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.341 [2024-11-20 19:04:29.436200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.341 qpair failed and we were unable to recover it. 
00:27:07.341 [2024-11-20 19:04:29.436391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.341 [2024-11-20 19:04:29.436426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.341 qpair failed and we were unable to recover it. 00:27:07.341 [2024-11-20 19:04:29.436605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.341 [2024-11-20 19:04:29.436637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.341 qpair failed and we were unable to recover it. 00:27:07.341 [2024-11-20 19:04:29.436825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.341 [2024-11-20 19:04:29.436859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.341 qpair failed and we were unable to recover it. 00:27:07.341 [2024-11-20 19:04:29.437037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.341 [2024-11-20 19:04:29.437070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.341 qpair failed and we were unable to recover it. 00:27:07.341 [2024-11-20 19:04:29.437312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.341 [2024-11-20 19:04:29.437348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.341 qpair failed and we were unable to recover it. 
00:27:07.341 [2024-11-20 19:04:29.437546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.341 [2024-11-20 19:04:29.437579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.341 qpair failed and we were unable to recover it. 00:27:07.341 [2024-11-20 19:04:29.437773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.341 [2024-11-20 19:04:29.437807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.341 qpair failed and we were unable to recover it. 00:27:07.341 [2024-11-20 19:04:29.437912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.341 [2024-11-20 19:04:29.437946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.341 qpair failed and we were unable to recover it. 00:27:07.341 [2024-11-20 19:04:29.438066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.341 [2024-11-20 19:04:29.438099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.341 qpair failed and we were unable to recover it. 00:27:07.341 [2024-11-20 19:04:29.438286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.341 [2024-11-20 19:04:29.438321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.341 qpair failed and we were unable to recover it. 
00:27:07.341 [2024-11-20 19:04:29.438586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.341 [2024-11-20 19:04:29.438619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.341 qpair failed and we were unable to recover it. 00:27:07.341 [2024-11-20 19:04:29.438803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.341 [2024-11-20 19:04:29.438837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.341 qpair failed and we were unable to recover it. 00:27:07.341 [2024-11-20 19:04:29.439031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.342 [2024-11-20 19:04:29.439064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.342 qpair failed and we were unable to recover it. 00:27:07.342 [2024-11-20 19:04:29.439239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.342 [2024-11-20 19:04:29.439274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.342 qpair failed and we were unable to recover it. 00:27:07.342 [2024-11-20 19:04:29.439396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.342 [2024-11-20 19:04:29.439430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.342 qpair failed and we were unable to recover it. 
00:27:07.342 [2024-11-20 19:04:29.439617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.342 [2024-11-20 19:04:29.439656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.342 qpair failed and we were unable to recover it. 00:27:07.342 [2024-11-20 19:04:29.439895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.342 [2024-11-20 19:04:29.439928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.342 qpair failed and we were unable to recover it. 00:27:07.342 [2024-11-20 19:04:29.440114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.342 [2024-11-20 19:04:29.440148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.342 qpair failed and we were unable to recover it. 00:27:07.342 [2024-11-20 19:04:29.440267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.342 [2024-11-20 19:04:29.440301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.342 qpair failed and we were unable to recover it. 00:27:07.342 [2024-11-20 19:04:29.440410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.342 [2024-11-20 19:04:29.440443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.342 qpair failed and we were unable to recover it. 
00:27:07.342 [2024-11-20 19:04:29.440555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.342 [2024-11-20 19:04:29.440588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.342 qpair failed and we were unable to recover it. 00:27:07.342 [2024-11-20 19:04:29.440767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.342 [2024-11-20 19:04:29.440799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.342 qpair failed and we were unable to recover it. 00:27:07.342 [2024-11-20 19:04:29.441063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.342 [2024-11-20 19:04:29.441095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.342 qpair failed and we were unable to recover it. 00:27:07.342 [2024-11-20 19:04:29.441221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.342 [2024-11-20 19:04:29.441257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.342 qpair failed and we were unable to recover it. 00:27:07.342 [2024-11-20 19:04:29.441364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.342 [2024-11-20 19:04:29.441397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.342 qpair failed and we were unable to recover it. 
00:27:07.342 [2024-11-20 19:04:29.441612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.342 [2024-11-20 19:04:29.441646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.342 qpair failed and we were unable to recover it. 00:27:07.342 [2024-11-20 19:04:29.441831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.342 [2024-11-20 19:04:29.441863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.342 qpair failed and we were unable to recover it. 00:27:07.342 [2024-11-20 19:04:29.442047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.342 [2024-11-20 19:04:29.442080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.342 qpair failed and we were unable to recover it. 00:27:07.342 [2024-11-20 19:04:29.442257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.342 [2024-11-20 19:04:29.442290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.342 qpair failed and we were unable to recover it. 00:27:07.342 [2024-11-20 19:04:29.442475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.342 [2024-11-20 19:04:29.442508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.342 qpair failed and we were unable to recover it. 
00:27:07.342 [2024-11-20 19:04:29.442709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.342 [2024-11-20 19:04:29.442743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.342 qpair failed and we were unable to recover it. 00:27:07.342 [2024-11-20 19:04:29.443030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.342 [2024-11-20 19:04:29.443063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.342 qpair failed and we were unable to recover it. 00:27:07.342 [2024-11-20 19:04:29.443324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.342 [2024-11-20 19:04:29.443358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.342 qpair failed and we were unable to recover it. 00:27:07.342 [2024-11-20 19:04:29.443476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.342 [2024-11-20 19:04:29.443509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.342 qpair failed and we were unable to recover it. 00:27:07.342 [2024-11-20 19:04:29.443681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.342 [2024-11-20 19:04:29.443714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.342 qpair failed and we were unable to recover it. 
00:27:07.343 [2024-11-20 19:04:29.448170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.343 [2024-11-20 19:04:29.448214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.343 qpair failed and we were unable to recover it. 00:27:07.343 [2024-11-20 19:04:29.448376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.343 [2024-11-20 19:04:29.448446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.343 qpair failed and we were unable to recover it. 00:27:07.343 [2024-11-20 19:04:29.448659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.343 [2024-11-20 19:04:29.448696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.343 qpair failed and we were unable to recover it. 00:27:07.343 [2024-11-20 19:04:29.448892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.343 [2024-11-20 19:04:29.448927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.343 qpair failed and we were unable to recover it. 00:27:07.343 [2024-11-20 19:04:29.449060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.343 [2024-11-20 19:04:29.449094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.343 qpair failed and we were unable to recover it. 
00:27:07.344 [2024-11-20 19:04:29.456609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.344 [2024-11-20 19:04:29.456643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.344 qpair failed and we were unable to recover it. 00:27:07.344 [2024-11-20 19:04:29.456823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.344 [2024-11-20 19:04:29.456897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.344 qpair failed and we were unable to recover it. 00:27:07.344 [2024-11-20 19:04:29.457149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.344 [2024-11-20 19:04:29.457231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.344 qpair failed and we were unable to recover it. 00:27:07.344 [2024-11-20 19:04:29.457433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.344 [2024-11-20 19:04:29.457472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.344 qpair failed and we were unable to recover it. 00:27:07.344 [2024-11-20 19:04:29.457677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.344 [2024-11-20 19:04:29.457712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.344 qpair failed and we were unable to recover it. 
00:27:07.345 [2024-11-20 19:04:29.467634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.345 [2024-11-20 19:04:29.467669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.345 qpair failed and we were unable to recover it. 00:27:07.345 [2024-11-20 19:04:29.467797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.345 [2024-11-20 19:04:29.467831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.345 qpair failed and we were unable to recover it. 00:27:07.345 [2024-11-20 19:04:29.468099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.345 [2024-11-20 19:04:29.468134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.345 qpair failed and we were unable to recover it. 00:27:07.345 [2024-11-20 19:04:29.468260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.345 [2024-11-20 19:04:29.468295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.345 qpair failed and we were unable to recover it. 00:27:07.345 [2024-11-20 19:04:29.468480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.345 [2024-11-20 19:04:29.468515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.345 qpair failed and we were unable to recover it. 
00:27:07.345 [2024-11-20 19:04:29.468696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.345 [2024-11-20 19:04:29.468730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.345 qpair failed and we were unable to recover it. 00:27:07.345 [2024-11-20 19:04:29.468938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.345 [2024-11-20 19:04:29.468971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.345 qpair failed and we were unable to recover it. 00:27:07.345 [2024-11-20 19:04:29.469104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.345 [2024-11-20 19:04:29.469137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.345 qpair failed and we were unable to recover it. 00:27:07.345 [2024-11-20 19:04:29.469269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.345 [2024-11-20 19:04:29.469305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.345 qpair failed and we were unable to recover it. 00:27:07.346 [2024-11-20 19:04:29.469432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.469466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.346 qpair failed and we were unable to recover it. 
00:27:07.346 [2024-11-20 19:04:29.469639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.469672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.346 qpair failed and we were unable to recover it. 00:27:07.346 [2024-11-20 19:04:29.469942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.469978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.346 qpair failed and we were unable to recover it. 00:27:07.346 [2024-11-20 19:04:29.470154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.470188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.346 qpair failed and we were unable to recover it. 00:27:07.346 [2024-11-20 19:04:29.470378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.470411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.346 qpair failed and we were unable to recover it. 00:27:07.346 [2024-11-20 19:04:29.470603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.470638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.346 qpair failed and we were unable to recover it. 
00:27:07.346 [2024-11-20 19:04:29.470775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.470809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.346 qpair failed and we were unable to recover it. 00:27:07.346 [2024-11-20 19:04:29.470988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.471023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.346 qpair failed and we were unable to recover it. 00:27:07.346 [2024-11-20 19:04:29.471191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.471236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.346 qpair failed and we were unable to recover it. 00:27:07.346 [2024-11-20 19:04:29.471364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.471406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.346 qpair failed and we were unable to recover it. 00:27:07.346 [2024-11-20 19:04:29.471604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.471638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.346 qpair failed and we were unable to recover it. 
00:27:07.346 [2024-11-20 19:04:29.471763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.471797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.346 qpair failed and we were unable to recover it. 00:27:07.346 [2024-11-20 19:04:29.471987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.472021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.346 qpair failed and we were unable to recover it. 00:27:07.346 [2024-11-20 19:04:29.472133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.472167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.346 qpair failed and we were unable to recover it. 00:27:07.346 [2024-11-20 19:04:29.472301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.472337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.346 qpair failed and we were unable to recover it. 00:27:07.346 [2024-11-20 19:04:29.472526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.472561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.346 qpair failed and we were unable to recover it. 
00:27:07.346 [2024-11-20 19:04:29.472693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.472727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.346 qpair failed and we were unable to recover it. 00:27:07.346 [2024-11-20 19:04:29.472855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.472889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.346 qpair failed and we were unable to recover it. 00:27:07.346 [2024-11-20 19:04:29.473065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.473101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.346 qpair failed and we were unable to recover it. 00:27:07.346 [2024-11-20 19:04:29.473347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.473382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.346 qpair failed and we were unable to recover it. 00:27:07.346 [2024-11-20 19:04:29.473584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.473619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.346 qpair failed and we were unable to recover it. 
00:27:07.346 [2024-11-20 19:04:29.473799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.473834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.346 qpair failed and we were unable to recover it. 00:27:07.346 [2024-11-20 19:04:29.474010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.474051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.346 qpair failed and we were unable to recover it. 00:27:07.346 [2024-11-20 19:04:29.474228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.474263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.346 qpair failed and we were unable to recover it. 00:27:07.346 [2024-11-20 19:04:29.474401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.474435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.346 qpair failed and we were unable to recover it. 00:27:07.346 [2024-11-20 19:04:29.474627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.474662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.346 qpair failed and we were unable to recover it. 
00:27:07.346 [2024-11-20 19:04:29.474849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.474884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.346 qpair failed and we were unable to recover it. 00:27:07.346 [2024-11-20 19:04:29.475001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.475036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.346 qpair failed and we were unable to recover it. 00:27:07.346 [2024-11-20 19:04:29.475234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.475269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.346 qpair failed and we were unable to recover it. 00:27:07.346 [2024-11-20 19:04:29.475379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.475414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.346 qpair failed and we were unable to recover it. 00:27:07.346 [2024-11-20 19:04:29.475532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.475566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.346 qpair failed and we were unable to recover it. 
00:27:07.346 [2024-11-20 19:04:29.475812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.475845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.346 qpair failed and we were unable to recover it. 00:27:07.346 [2024-11-20 19:04:29.476020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.476054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.346 qpair failed and we were unable to recover it. 00:27:07.346 [2024-11-20 19:04:29.476249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.476284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.346 qpair failed and we were unable to recover it. 00:27:07.346 [2024-11-20 19:04:29.476403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.476437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.346 qpair failed and we were unable to recover it. 00:27:07.346 [2024-11-20 19:04:29.476487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:07.346 [2024-11-20 19:04:29.476652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.476687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.346 qpair failed and we were unable to recover it. 
00:27:07.346 [2024-11-20 19:04:29.476887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.346 [2024-11-20 19:04:29.476922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.347 qpair failed and we were unable to recover it. 00:27:07.347 [2024-11-20 19:04:29.477039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.347 [2024-11-20 19:04:29.477073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.347 qpair failed and we were unable to recover it. 00:27:07.347 [2024-11-20 19:04:29.477255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.347 [2024-11-20 19:04:29.477290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.347 qpair failed and we were unable to recover it. 00:27:07.347 [2024-11-20 19:04:29.477405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.347 [2024-11-20 19:04:29.477440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.347 qpair failed and we were unable to recover it. 00:27:07.347 [2024-11-20 19:04:29.477557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.347 [2024-11-20 19:04:29.477591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.347 qpair failed and we were unable to recover it. 
00:27:07.347 [2024-11-20 19:04:29.477705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.347 [2024-11-20 19:04:29.477739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.347 qpair failed and we were unable to recover it. 00:27:07.347 [2024-11-20 19:04:29.477942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.347 [2024-11-20 19:04:29.477975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.347 qpair failed and we were unable to recover it. 00:27:07.347 [2024-11-20 19:04:29.478152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.347 [2024-11-20 19:04:29.478186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.347 qpair failed and we were unable to recover it. 00:27:07.347 [2024-11-20 19:04:29.478325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.347 [2024-11-20 19:04:29.478360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.347 qpair failed and we were unable to recover it. 00:27:07.347 [2024-11-20 19:04:29.478480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.347 [2024-11-20 19:04:29.478515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.347 qpair failed and we were unable to recover it. 
00:27:07.347 [2024-11-20 19:04:29.478783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.347 [2024-11-20 19:04:29.478818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.347 qpair failed and we were unable to recover it. 00:27:07.347 [2024-11-20 19:04:29.479021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.347 [2024-11-20 19:04:29.479055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.347 qpair failed and we were unable to recover it. 00:27:07.347 [2024-11-20 19:04:29.479295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.347 [2024-11-20 19:04:29.479331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.347 qpair failed and we were unable to recover it. 00:27:07.347 [2024-11-20 19:04:29.479549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.347 [2024-11-20 19:04:29.479590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.347 qpair failed and we were unable to recover it. 00:27:07.347 [2024-11-20 19:04:29.479772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.347 [2024-11-20 19:04:29.479806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.347 qpair failed and we were unable to recover it. 
00:27:07.347 [2024-11-20 19:04:29.479914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.347 [2024-11-20 19:04:29.479949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.347 qpair failed and we were unable to recover it. 00:27:07.347 [2024-11-20 19:04:29.480135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.347 [2024-11-20 19:04:29.480170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.347 qpair failed and we were unable to recover it. 00:27:07.347 [2024-11-20 19:04:29.480285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.347 [2024-11-20 19:04:29.480323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.347 qpair failed and we were unable to recover it. 00:27:07.347 [2024-11-20 19:04:29.480514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.347 [2024-11-20 19:04:29.480547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.347 qpair failed and we were unable to recover it. 00:27:07.347 [2024-11-20 19:04:29.480720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.347 [2024-11-20 19:04:29.480754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.347 qpair failed and we were unable to recover it. 
00:27:07.347 [2024-11-20 19:04:29.480942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.347 [2024-11-20 19:04:29.480975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.347 qpair failed and we were unable to recover it. 00:27:07.347 [2024-11-20 19:04:29.481096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.347 [2024-11-20 19:04:29.481130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.347 qpair failed and we were unable to recover it. 00:27:07.347 [2024-11-20 19:04:29.481309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.347 [2024-11-20 19:04:29.481345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.347 qpair failed and we were unable to recover it. 00:27:07.347 [2024-11-20 19:04:29.481537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.347 [2024-11-20 19:04:29.481572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.347 qpair failed and we were unable to recover it. 00:27:07.347 [2024-11-20 19:04:29.481739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.347 [2024-11-20 19:04:29.481772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.347 qpair failed and we were unable to recover it. 
00:27:07.347 [2024-11-20 19:04:29.481956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.347 [2024-11-20 19:04:29.481990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.347 qpair failed and we were unable to recover it. 00:27:07.347 [2024-11-20 19:04:29.482178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.347 [2024-11-20 19:04:29.482222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.347 qpair failed and we were unable to recover it. 00:27:07.347 [2024-11-20 19:04:29.482417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.347 [2024-11-20 19:04:29.482451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.347 qpair failed and we were unable to recover it. 00:27:07.347 [2024-11-20 19:04:29.482645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.347 [2024-11-20 19:04:29.482679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.347 qpair failed and we were unable to recover it. 00:27:07.347 [2024-11-20 19:04:29.482883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.347 [2024-11-20 19:04:29.482918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.347 qpair failed and we were unable to recover it. 
00:27:07.347 [2024-11-20 19:04:29.483120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.347 [2024-11-20 19:04:29.483153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.347 qpair failed and we were unable to recover it. 00:27:07.347 [2024-11-20 19:04:29.483286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.347 [2024-11-20 19:04:29.483321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.347 qpair failed and we were unable to recover it. 00:27:07.347 [2024-11-20 19:04:29.483499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.347 [2024-11-20 19:04:29.483533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.347 qpair failed and we were unable to recover it. 00:27:07.347 [2024-11-20 19:04:29.483640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.347 [2024-11-20 19:04:29.483672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.347 qpair failed and we were unable to recover it. 00:27:07.347 [2024-11-20 19:04:29.483875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.347 [2024-11-20 19:04:29.483908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.347 qpair failed and we were unable to recover it. 
00:27:07.347 [2024-11-20 19:04:29.484041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.347 [2024-11-20 19:04:29.484074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.347 qpair failed and we were unable to recover it. 00:27:07.347 [2024-11-20 19:04:29.484349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.347 [2024-11-20 19:04:29.484384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.348 qpair failed and we were unable to recover it. 00:27:07.348 [2024-11-20 19:04:29.484509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.348 [2024-11-20 19:04:29.484542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.348 qpair failed and we were unable to recover it. 00:27:07.348 [2024-11-20 19:04:29.484745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.348 [2024-11-20 19:04:29.484780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.348 qpair failed and we were unable to recover it. 00:27:07.348 [2024-11-20 19:04:29.485002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.348 [2024-11-20 19:04:29.485035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.348 qpair failed and we were unable to recover it. 
00:27:07.348 [2024-11-20 19:04:29.485266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.348 [2024-11-20 19:04:29.485304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.348 qpair failed and we were unable to recover it.
00:27:07.348 [2024-11-20 19:04:29.485446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.348 [2024-11-20 19:04:29.485480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.348 qpair failed and we were unable to recover it.
00:27:07.348 [2024-11-20 19:04:29.485652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.348 [2024-11-20 19:04:29.485686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.348 qpair failed and we were unable to recover it.
00:27:07.348 [2024-11-20 19:04:29.485815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.348 [2024-11-20 19:04:29.485849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.348 qpair failed and we were unable to recover it.
00:27:07.348 [2024-11-20 19:04:29.485994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.348 [2024-11-20 19:04:29.486027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.348 qpair failed and we were unable to recover it.
00:27:07.348 [2024-11-20 19:04:29.486272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.348 [2024-11-20 19:04:29.486307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.348 qpair failed and we were unable to recover it.
00:27:07.348 [2024-11-20 19:04:29.486454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.348 [2024-11-20 19:04:29.486488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.348 qpair failed and we were unable to recover it.
00:27:07.348 [2024-11-20 19:04:29.486601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.348 [2024-11-20 19:04:29.486637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.348 qpair failed and we were unable to recover it.
00:27:07.348 [2024-11-20 19:04:29.486832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.348 [2024-11-20 19:04:29.486866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.348 qpair failed and we were unable to recover it.
00:27:07.348 [2024-11-20 19:04:29.487009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.348 [2024-11-20 19:04:29.487043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.348 qpair failed and we were unable to recover it.
00:27:07.348 [2024-11-20 19:04:29.487231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.348 [2024-11-20 19:04:29.487266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.348 qpair failed and we were unable to recover it.
00:27:07.348 [2024-11-20 19:04:29.487464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.348 [2024-11-20 19:04:29.487498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.348 qpair failed and we were unable to recover it.
00:27:07.348 [2024-11-20 19:04:29.487747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.348 [2024-11-20 19:04:29.487782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.348 qpair failed and we were unable to recover it.
00:27:07.348 [2024-11-20 19:04:29.487907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.348 [2024-11-20 19:04:29.487949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.348 qpair failed and we were unable to recover it.
00:27:07.348 [2024-11-20 19:04:29.488128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.348 [2024-11-20 19:04:29.488162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.348 qpair failed and we were unable to recover it.
00:27:07.348 [2024-11-20 19:04:29.488362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.348 [2024-11-20 19:04:29.488398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.348 qpair failed and we were unable to recover it.
00:27:07.348 [2024-11-20 19:04:29.488581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.348 [2024-11-20 19:04:29.488615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.348 qpair failed and we were unable to recover it.
00:27:07.348 [2024-11-20 19:04:29.488823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.348 [2024-11-20 19:04:29.488858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.348 qpair failed and we were unable to recover it.
00:27:07.348 [2024-11-20 19:04:29.489062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.348 [2024-11-20 19:04:29.489096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.348 qpair failed and we were unable to recover it.
00:27:07.348 [2024-11-20 19:04:29.489212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.348 [2024-11-20 19:04:29.489246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.348 qpair failed and we were unable to recover it.
00:27:07.348 [2024-11-20 19:04:29.489502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.348 [2024-11-20 19:04:29.489536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.348 qpair failed and we were unable to recover it.
00:27:07.348 [2024-11-20 19:04:29.489709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.348 [2024-11-20 19:04:29.489744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.348 qpair failed and we were unable to recover it.
00:27:07.348 [2024-11-20 19:04:29.489915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.348 [2024-11-20 19:04:29.489949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.348 qpair failed and we were unable to recover it.
00:27:07.348 [2024-11-20 19:04:29.490194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.348 [2024-11-20 19:04:29.490240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.348 qpair failed and we were unable to recover it.
00:27:07.348 [2024-11-20 19:04:29.490383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.348 [2024-11-20 19:04:29.490416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.348 qpair failed and we were unable to recover it.
00:27:07.348 [2024-11-20 19:04:29.490657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.348 [2024-11-20 19:04:29.490690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.348 qpair failed and we were unable to recover it.
00:27:07.348 [2024-11-20 19:04:29.490936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.348 [2024-11-20 19:04:29.490969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.348 qpair failed and we were unable to recover it.
00:27:07.348 [2024-11-20 19:04:29.491162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.348 [2024-11-20 19:04:29.491196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.348 qpair failed and we were unable to recover it.
00:27:07.348 [2024-11-20 19:04:29.491342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.348 [2024-11-20 19:04:29.491378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.348 qpair failed and we were unable to recover it.
00:27:07.348 [2024-11-20 19:04:29.491621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.491654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.349 [2024-11-20 19:04:29.491945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.491979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.349 [2024-11-20 19:04:29.492112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.492146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.349 [2024-11-20 19:04:29.492286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.492321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.349 [2024-11-20 19:04:29.492447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.492482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.349 [2024-11-20 19:04:29.492613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.492647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.349 [2024-11-20 19:04:29.492762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.492795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.349 [2024-11-20 19:04:29.492968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.493001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.349 [2024-11-20 19:04:29.493275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.493310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.349 [2024-11-20 19:04:29.493555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.493588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.349 [2024-11-20 19:04:29.493720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.493754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.349 [2024-11-20 19:04:29.493959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.494003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.349 [2024-11-20 19:04:29.494226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.494266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.349 [2024-11-20 19:04:29.494538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.494573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.349 [2024-11-20 19:04:29.494759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.494794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.349 [2024-11-20 19:04:29.494986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.495019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.349 [2024-11-20 19:04:29.495237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.495274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.349 [2024-11-20 19:04:29.495491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.495525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.349 [2024-11-20 19:04:29.495672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.495706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.349 [2024-11-20 19:04:29.495919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.495953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.349 [2024-11-20 19:04:29.496151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.496186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.349 [2024-11-20 19:04:29.496398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.496432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.349 [2024-11-20 19:04:29.496730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.496764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.349 [2024-11-20 19:04:29.496901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.496934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.349 [2024-11-20 19:04:29.497135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.497175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.349 [2024-11-20 19:04:29.497389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.497423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.349 [2024-11-20 19:04:29.497611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.497646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.349 [2024-11-20 19:04:29.497752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.497784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.349 [2024-11-20 19:04:29.497907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.497942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.349 [2024-11-20 19:04:29.498226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.498262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.349 [2024-11-20 19:04:29.498448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.498481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.349 [2024-11-20 19:04:29.498749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.498782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.349 [2024-11-20 19:04:29.498955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.498988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.349 [2024-11-20 19:04:29.499177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.499220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.349 [2024-11-20 19:04:29.499420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.499454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.349 [2024-11-20 19:04:29.499644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.499677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.349 [2024-11-20 19:04:29.499871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.349 [2024-11-20 19:04:29.499904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.349 qpair failed and we were unable to recover it.
00:27:07.350 [2024-11-20 19:04:29.500153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.350 [2024-11-20 19:04:29.500187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.350 qpair failed and we were unable to recover it.
00:27:07.350 [2024-11-20 19:04:29.500403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.350 [2024-11-20 19:04:29.500438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.350 qpair failed and we were unable to recover it.
00:27:07.350 [2024-11-20 19:04:29.500545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.350 [2024-11-20 19:04:29.500578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.350 qpair failed and we were unable to recover it.
00:27:07.350 [2024-11-20 19:04:29.500777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.350 [2024-11-20 19:04:29.500811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.350 qpair failed and we were unable to recover it.
00:27:07.350 [2024-11-20 19:04:29.501054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.350 [2024-11-20 19:04:29.501088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.350 qpair failed and we were unable to recover it.
00:27:07.350 [2024-11-20 19:04:29.501200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.350 [2024-11-20 19:04:29.501244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.350 qpair failed and we were unable to recover it.
00:27:07.350 [2024-11-20 19:04:29.501464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.350 [2024-11-20 19:04:29.501499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.350 qpair failed and we were unable to recover it.
00:27:07.350 [2024-11-20 19:04:29.501679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.350 [2024-11-20 19:04:29.501714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.350 qpair failed and we were unable to recover it.
00:27:07.350 [2024-11-20 19:04:29.501909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.350 [2024-11-20 19:04:29.501942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.350 qpair failed and we were unable to recover it.
00:27:07.350 [2024-11-20 19:04:29.502147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.350 [2024-11-20 19:04:29.502181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.350 qpair failed and we were unable to recover it.
00:27:07.350 [2024-11-20 19:04:29.502296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.350 [2024-11-20 19:04:29.502332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.350 qpair failed and we were unable to recover it.
00:27:07.350 [2024-11-20 19:04:29.502524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.350 [2024-11-20 19:04:29.502557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.350 qpair failed and we were unable to recover it.
00:27:07.350 [2024-11-20 19:04:29.502821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.350 [2024-11-20 19:04:29.502856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.350 qpair failed and we were unable to recover it.
00:27:07.350 [2024-11-20 19:04:29.503127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.350 [2024-11-20 19:04:29.503160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.350 qpair failed and we were unable to recover it.
00:27:07.350 [2024-11-20 19:04:29.503407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.350 [2024-11-20 19:04:29.503450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.350 qpair failed and we were unable to recover it.
00:27:07.350 [2024-11-20 19:04:29.503567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.350 [2024-11-20 19:04:29.503602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.350 qpair failed and we were unable to recover it.
00:27:07.350 [2024-11-20 19:04:29.503714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.350 [2024-11-20 19:04:29.503748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.350 qpair failed and we were unable to recover it.
00:27:07.350 [2024-11-20 19:04:29.503954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.350 [2024-11-20 19:04:29.503988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.350 qpair failed and we were unable to recover it.
00:27:07.350 [2024-11-20 19:04:29.504266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.350 [2024-11-20 19:04:29.504303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.350 qpair failed and we were unable to recover it.
00:27:07.350 [2024-11-20 19:04:29.504433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.350 [2024-11-20 19:04:29.504467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.350 qpair failed and we were unable to recover it.
00:27:07.350 [2024-11-20 19:04:29.504606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.350 [2024-11-20 19:04:29.504641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.350 qpair failed and we were unable to recover it.
00:27:07.350 [2024-11-20 19:04:29.504846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.350 [2024-11-20 19:04:29.504880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.350 qpair failed and we were unable to recover it.
00:27:07.350 [2024-11-20 19:04:29.505001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.350 [2024-11-20 19:04:29.505037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.350 qpair failed and we were unable to recover it.
00:27:07.350 [2024-11-20 19:04:29.505218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.350 [2024-11-20 19:04:29.505254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.350 qpair failed and we were unable to recover it.
00:27:07.350 [2024-11-20 19:04:29.505447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.350 [2024-11-20 19:04:29.505481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.350 qpair failed and we were unable to recover it.
00:27:07.350 [2024-11-20 19:04:29.505617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.350 [2024-11-20 19:04:29.505652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.350 qpair failed and we were unable to recover it.
00:27:07.350 [2024-11-20 19:04:29.505828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.350 [2024-11-20 19:04:29.505863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.350 qpair failed and we were unable to recover it.
00:27:07.350 [2024-11-20 19:04:29.506038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.350 [2024-11-20 19:04:29.506073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.350 qpair failed and we were unable to recover it.
00:27:07.350 [2024-11-20 19:04:29.506274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.350 [2024-11-20 19:04:29.506309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.350 qpair failed and we were unable to recover it.
00:27:07.350 [2024-11-20 19:04:29.506575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.350 [2024-11-20 19:04:29.506610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.350 qpair failed and we were unable to recover it. 00:27:07.350 [2024-11-20 19:04:29.506796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.350 [2024-11-20 19:04:29.506830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.350 qpair failed and we were unable to recover it. 00:27:07.350 [2024-11-20 19:04:29.506963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.350 [2024-11-20 19:04:29.506997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.350 qpair failed and we were unable to recover it. 00:27:07.350 [2024-11-20 19:04:29.507170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.350 [2024-11-20 19:04:29.507212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.350 qpair failed and we were unable to recover it. 00:27:07.350 [2024-11-20 19:04:29.507400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.350 [2024-11-20 19:04:29.507434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.350 qpair failed and we were unable to recover it. 
00:27:07.350 [2024-11-20 19:04:29.507617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.350 [2024-11-20 19:04:29.507654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.350 qpair failed and we were unable to recover it. 00:27:07.350 [2024-11-20 19:04:29.507848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.350 [2024-11-20 19:04:29.507882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.350 qpair failed and we were unable to recover it. 00:27:07.350 [2024-11-20 19:04:29.508021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.350 [2024-11-20 19:04:29.508056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.350 qpair failed and we were unable to recover it. 00:27:07.350 [2024-11-20 19:04:29.508194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.350 [2024-11-20 19:04:29.508241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 00:27:07.351 [2024-11-20 19:04:29.508417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.351 [2024-11-20 19:04:29.508451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 
00:27:07.351 [2024-11-20 19:04:29.508558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.351 [2024-11-20 19:04:29.508592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 00:27:07.351 [2024-11-20 19:04:29.508767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.351 [2024-11-20 19:04:29.508802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 00:27:07.351 [2024-11-20 19:04:29.508921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.351 [2024-11-20 19:04:29.508961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 00:27:07.351 [2024-11-20 19:04:29.509132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.351 [2024-11-20 19:04:29.509168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 00:27:07.351 [2024-11-20 19:04:29.509362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.351 [2024-11-20 19:04:29.509398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 
00:27:07.351 [2024-11-20 19:04:29.509591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.351 [2024-11-20 19:04:29.509626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 00:27:07.351 [2024-11-20 19:04:29.509806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.351 [2024-11-20 19:04:29.509841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 00:27:07.351 [2024-11-20 19:04:29.510020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.351 [2024-11-20 19:04:29.510055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 00:27:07.351 [2024-11-20 19:04:29.510182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.351 [2024-11-20 19:04:29.510226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 00:27:07.351 [2024-11-20 19:04:29.510424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.351 [2024-11-20 19:04:29.510459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 
00:27:07.351 [2024-11-20 19:04:29.510633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.351 [2024-11-20 19:04:29.510668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 00:27:07.351 [2024-11-20 19:04:29.510923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.351 [2024-11-20 19:04:29.510957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 00:27:07.351 [2024-11-20 19:04:29.511133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.351 [2024-11-20 19:04:29.511168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 00:27:07.351 [2024-11-20 19:04:29.511393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.351 [2024-11-20 19:04:29.511433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 00:27:07.351 [2024-11-20 19:04:29.511665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.351 [2024-11-20 19:04:29.511699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 
00:27:07.351 [2024-11-20 19:04:29.511976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.351 [2024-11-20 19:04:29.512012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 00:27:07.351 [2024-11-20 19:04:29.512293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.351 [2024-11-20 19:04:29.512329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 00:27:07.351 [2024-11-20 19:04:29.512457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.351 [2024-11-20 19:04:29.512490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 00:27:07.351 [2024-11-20 19:04:29.512752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.351 [2024-11-20 19:04:29.512786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 00:27:07.351 [2024-11-20 19:04:29.513052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.351 [2024-11-20 19:04:29.513086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 
00:27:07.351 [2024-11-20 19:04:29.513275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.351 [2024-11-20 19:04:29.513311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 00:27:07.351 [2024-11-20 19:04:29.513449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.351 [2024-11-20 19:04:29.513484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 00:27:07.351 [2024-11-20 19:04:29.513666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.351 [2024-11-20 19:04:29.513700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 00:27:07.351 [2024-11-20 19:04:29.513987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.351 [2024-11-20 19:04:29.514022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 00:27:07.351 [2024-11-20 19:04:29.514266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.351 [2024-11-20 19:04:29.514305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 
00:27:07.351 [2024-11-20 19:04:29.514425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.351 [2024-11-20 19:04:29.514458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 00:27:07.351 [2024-11-20 19:04:29.514672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.351 [2024-11-20 19:04:29.514709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 00:27:07.351 [2024-11-20 19:04:29.514922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.351 [2024-11-20 19:04:29.514956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 00:27:07.351 [2024-11-20 19:04:29.515058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.351 [2024-11-20 19:04:29.515093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 00:27:07.351 [2024-11-20 19:04:29.515381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.351 [2024-11-20 19:04:29.515422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 
00:27:07.351 [2024-11-20 19:04:29.515611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.351 [2024-11-20 19:04:29.515649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 00:27:07.351 [2024-11-20 19:04:29.515830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.351 [2024-11-20 19:04:29.515866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 00:27:07.351 [2024-11-20 19:04:29.516063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.351 [2024-11-20 19:04:29.516098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 00:27:07.351 [2024-11-20 19:04:29.516279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.351 [2024-11-20 19:04:29.516316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 00:27:07.351 [2024-11-20 19:04:29.516438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.351 [2024-11-20 19:04:29.516473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.351 qpair failed and we were unable to recover it. 
00:27:07.352 [2024-11-20 19:04:29.516739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.352 [2024-11-20 19:04:29.516775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.352 qpair failed and we were unable to recover it. 00:27:07.352 [2024-11-20 19:04:29.516967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.352 [2024-11-20 19:04:29.517001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.352 qpair failed and we were unable to recover it. 00:27:07.352 [2024-11-20 19:04:29.517137] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:07.352 [2024-11-20 19:04:29.517165] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:07.352 [2024-11-20 19:04:29.517174] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:07.352 [2024-11-20 19:04:29.517181] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:07.352 [2024-11-20 19:04:29.517190] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:07.352 [2024-11-20 19:04:29.517143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.352 [2024-11-20 19:04:29.517174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.352 qpair failed and we were unable to recover it. 
00:27:07.352 [2024-11-20 19:04:29.517358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.352 [2024-11-20 19:04:29.517391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.352 qpair failed and we were unable to recover it. 00:27:07.352 [2024-11-20 19:04:29.517662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.352 [2024-11-20 19:04:29.517696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.352 qpair failed and we were unable to recover it. 00:27:07.352 [2024-11-20 19:04:29.517961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.352 [2024-11-20 19:04:29.517999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.352 qpair failed and we were unable to recover it. 00:27:07.352 [2024-11-20 19:04:29.518133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.352 [2024-11-20 19:04:29.518168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.352 qpair failed and we were unable to recover it. 00:27:07.352 [2024-11-20 19:04:29.518385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.352 [2024-11-20 19:04:29.518419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.352 qpair failed and we were unable to recover it. 
00:27:07.352 [2024-11-20 19:04:29.518697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.352 [2024-11-20 19:04:29.518731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.352 qpair failed and we were unable to recover it. 00:27:07.352 [2024-11-20 19:04:29.518791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:07.352 [2024-11-20 19:04:29.518897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:07.352 [2024-11-20 19:04:29.519022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:07.352 [2024-11-20 19:04:29.519023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:07.352 [2024-11-20 19:04:29.518870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.352 [2024-11-20 19:04:29.518904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.352 qpair failed and we were unable to recover it. 00:27:07.352 [2024-11-20 19:04:29.519019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.352 [2024-11-20 19:04:29.519051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.352 qpair failed and we were unable to recover it. 00:27:07.352 [2024-11-20 19:04:29.519254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.352 [2024-11-20 19:04:29.519288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.352 qpair failed and we were unable to recover it. 
00:27:07.352 [2024-11-20 19:04:29.519535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.352 [2024-11-20 19:04:29.519575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.352 qpair failed and we were unable to recover it. 00:27:07.352 [2024-11-20 19:04:29.519754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.352 [2024-11-20 19:04:29.519787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.352 qpair failed and we were unable to recover it. 00:27:07.352 [2024-11-20 19:04:29.519977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.352 [2024-11-20 19:04:29.520011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.352 qpair failed and we were unable to recover it. 00:27:07.352 [2024-11-20 19:04:29.520255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.352 [2024-11-20 19:04:29.520291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.352 qpair failed and we were unable to recover it. 00:27:07.352 [2024-11-20 19:04:29.520562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.352 [2024-11-20 19:04:29.520596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.352 qpair failed and we were unable to recover it. 
00:27:07.352 [2024-11-20 19:04:29.520811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.352 [2024-11-20 19:04:29.520848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.352 qpair failed and we were unable to recover it. 00:27:07.352 [2024-11-20 19:04:29.521031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.352 [2024-11-20 19:04:29.521065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.352 qpair failed and we were unable to recover it. 00:27:07.352 [2024-11-20 19:04:29.521260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.352 [2024-11-20 19:04:29.521296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.352 qpair failed and we were unable to recover it. 00:27:07.352 [2024-11-20 19:04:29.521415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.352 [2024-11-20 19:04:29.521449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.352 qpair failed and we were unable to recover it. 00:27:07.352 [2024-11-20 19:04:29.521642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.352 [2024-11-20 19:04:29.521676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.352 qpair failed and we were unable to recover it. 
00:27:07.352 [2024-11-20 19:04:29.521870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.352 [2024-11-20 19:04:29.521903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.352 qpair failed and we were unable to recover it. 00:27:07.352 [2024-11-20 19:04:29.522086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.352 [2024-11-20 19:04:29.522120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.352 qpair failed and we were unable to recover it. 00:27:07.352 [2024-11-20 19:04:29.522365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.352 [2024-11-20 19:04:29.522401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.352 qpair failed and we were unable to recover it. 00:27:07.352 [2024-11-20 19:04:29.522534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.352 [2024-11-20 19:04:29.522568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.352 qpair failed and we were unable to recover it. 00:27:07.352 [2024-11-20 19:04:29.522758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.352 [2024-11-20 19:04:29.522791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.352 qpair failed and we were unable to recover it. 
00:27:07.352 [2024-11-20 19:04:29.522979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.352 [2024-11-20 19:04:29.523013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.352 qpair failed and we were unable to recover it. 00:27:07.352 [2024-11-20 19:04:29.523186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.352 [2024-11-20 19:04:29.523228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.352 qpair failed and we were unable to recover it. 00:27:07.352 [2024-11-20 19:04:29.523424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.352 [2024-11-20 19:04:29.523459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.352 qpair failed and we were unable to recover it. 00:27:07.352 [2024-11-20 19:04:29.523568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.352 [2024-11-20 19:04:29.523603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.352 qpair failed and we were unable to recover it. 00:27:07.352 [2024-11-20 19:04:29.523739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.352 [2024-11-20 19:04:29.523772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.352 qpair failed and we were unable to recover it. 
00:27:07.352 [2024-11-20 19:04:29.524023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.352 [2024-11-20 19:04:29.524058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.352 qpair failed and we were unable to recover it. 00:27:07.352 [2024-11-20 19:04:29.524258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.352 [2024-11-20 19:04:29.524300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.353 qpair failed and we were unable to recover it. 00:27:07.353 [2024-11-20 19:04:29.524536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.353 [2024-11-20 19:04:29.524570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.353 qpair failed and we were unable to recover it. 00:27:07.353 [2024-11-20 19:04:29.524848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.353 [2024-11-20 19:04:29.524882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.353 qpair failed and we were unable to recover it. 00:27:07.353 [2024-11-20 19:04:29.525097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.353 [2024-11-20 19:04:29.525132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.353 qpair failed and we were unable to recover it. 
00:27:07.353 [2024-11-20 19:04:29.525350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.353 [2024-11-20 19:04:29.525384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.353 qpair failed and we were unable to recover it. 00:27:07.353 [2024-11-20 19:04:29.525501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.353 [2024-11-20 19:04:29.525534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.353 qpair failed and we were unable to recover it. 00:27:07.353 [2024-11-20 19:04:29.525801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.353 [2024-11-20 19:04:29.525835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.353 qpair failed and we were unable to recover it. 00:27:07.353 [2024-11-20 19:04:29.526086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.353 [2024-11-20 19:04:29.526120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.353 qpair failed and we were unable to recover it. 00:27:07.353 [2024-11-20 19:04:29.526243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.353 [2024-11-20 19:04:29.526277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.353 qpair failed and we were unable to recover it. 
00:27:07.353 [2024-11-20 19:04:29.526414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.353 [2024-11-20 19:04:29.526447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.353 qpair failed and we were unable to recover it. 00:27:07.353 [2024-11-20 19:04:29.526556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.353 [2024-11-20 19:04:29.526589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.353 qpair failed and we were unable to recover it. 00:27:07.353 [2024-11-20 19:04:29.526853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.353 [2024-11-20 19:04:29.526893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.353 qpair failed and we were unable to recover it. 00:27:07.353 [2024-11-20 19:04:29.527087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.353 [2024-11-20 19:04:29.527122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.353 qpair failed and we were unable to recover it. 00:27:07.353 [2024-11-20 19:04:29.527247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.353 [2024-11-20 19:04:29.527283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.353 qpair failed and we were unable to recover it. 
00:27:07.353 [2024-11-20 19:04:29.527487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.353 [2024-11-20 19:04:29.527521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.353 qpair failed and we were unable to recover it. 00:27:07.353 [2024-11-20 19:04:29.527767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.353 [2024-11-20 19:04:29.527801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.353 qpair failed and we were unable to recover it. 00:27:07.353 [2024-11-20 19:04:29.528045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.353 [2024-11-20 19:04:29.528080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.353 qpair failed and we were unable to recover it. 00:27:07.353 [2024-11-20 19:04:29.528276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.353 [2024-11-20 19:04:29.528312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.353 qpair failed and we were unable to recover it. 00:27:07.353 [2024-11-20 19:04:29.528487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.353 [2024-11-20 19:04:29.528521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.353 qpair failed and we were unable to recover it. 
00:27:07.353 [2024-11-20 19:04:29.528736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.353 [2024-11-20 19:04:29.528771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.353 qpair failed and we were unable to recover it. 00:27:07.353 [2024-11-20 19:04:29.528968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.353 [2024-11-20 19:04:29.529005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.353 qpair failed and we were unable to recover it. 00:27:07.353 [2024-11-20 19:04:29.529179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.353 [2024-11-20 19:04:29.529225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.353 qpair failed and we were unable to recover it. 00:27:07.353 [2024-11-20 19:04:29.529402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.353 [2024-11-20 19:04:29.529437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.353 qpair failed and we were unable to recover it. 00:27:07.353 [2024-11-20 19:04:29.529658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.353 [2024-11-20 19:04:29.529693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.353 qpair failed and we were unable to recover it. 
00:27:07.353 [2024-11-20 19:04:29.529824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.353 [2024-11-20 19:04:29.529860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.353 qpair failed and we were unable to recover it. 00:27:07.353 [2024-11-20 19:04:29.530042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.353 [2024-11-20 19:04:29.530077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.353 qpair failed and we were unable to recover it. 00:27:07.353 [2024-11-20 19:04:29.530255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.353 [2024-11-20 19:04:29.530292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.353 qpair failed and we were unable to recover it. 00:27:07.353 [2024-11-20 19:04:29.530473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.353 [2024-11-20 19:04:29.530508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.353 qpair failed and we were unable to recover it. 00:27:07.353 [2024-11-20 19:04:29.530655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.353 [2024-11-20 19:04:29.530691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.353 qpair failed and we were unable to recover it. 
00:27:07.353 [2024-11-20 19:04:29.530804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.353 [2024-11-20 19:04:29.530838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.353 qpair failed and we were unable to recover it. 00:27:07.353 [2024-11-20 19:04:29.531036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.353 [2024-11-20 19:04:29.531072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.353 qpair failed and we were unable to recover it. 00:27:07.353 [2024-11-20 19:04:29.531284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.353 [2024-11-20 19:04:29.531320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.353 qpair failed and we were unable to recover it. 00:27:07.353 [2024-11-20 19:04:29.531570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.353 [2024-11-20 19:04:29.531604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.353 qpair failed and we were unable to recover it. 00:27:07.353 [2024-11-20 19:04:29.531806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.353 [2024-11-20 19:04:29.531840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.353 qpair failed and we were unable to recover it. 
00:27:07.353 [2024-11-20 19:04:29.531962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-11-20 19:04:29.531996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.354 qpair failed and we were unable to recover it. 00:27:07.354 [2024-11-20 19:04:29.532119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-11-20 19:04:29.532152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.354 qpair failed and we were unable to recover it. 00:27:07.354 [2024-11-20 19:04:29.532290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-11-20 19:04:29.532327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.354 qpair failed and we were unable to recover it. 00:27:07.354 [2024-11-20 19:04:29.532516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-11-20 19:04:29.532550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.354 qpair failed and we were unable to recover it. 00:27:07.354 [2024-11-20 19:04:29.532700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-11-20 19:04:29.532737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.354 qpair failed and we were unable to recover it. 
00:27:07.354 [2024-11-20 19:04:29.532913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-11-20 19:04:29.532947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.354 qpair failed and we were unable to recover it. 00:27:07.354 [2024-11-20 19:04:29.533183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-11-20 19:04:29.533226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.354 qpair failed and we were unable to recover it. 00:27:07.354 [2024-11-20 19:04:29.533517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-11-20 19:04:29.533552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.354 qpair failed and we were unable to recover it. 00:27:07.354 [2024-11-20 19:04:29.533733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-11-20 19:04:29.533766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.354 qpair failed and we were unable to recover it. 00:27:07.354 [2024-11-20 19:04:29.533890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-11-20 19:04:29.533924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.354 qpair failed and we were unable to recover it. 
00:27:07.354 [2024-11-20 19:04:29.534194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-11-20 19:04:29.534240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.354 qpair failed and we were unable to recover it. 00:27:07.354 [2024-11-20 19:04:29.534385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-11-20 19:04:29.534418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.354 qpair failed and we were unable to recover it. 00:27:07.354 [2024-11-20 19:04:29.534548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-11-20 19:04:29.534582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.354 qpair failed and we were unable to recover it. 00:27:07.354 [2024-11-20 19:04:29.534785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-11-20 19:04:29.534818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.354 qpair failed and we were unable to recover it. 00:27:07.354 [2024-11-20 19:04:29.535000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-11-20 19:04:29.535035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.354 qpair failed and we were unable to recover it. 
00:27:07.354 [2024-11-20 19:04:29.535157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-11-20 19:04:29.535191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.354 qpair failed and we were unable to recover it. 00:27:07.354 [2024-11-20 19:04:29.535397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-11-20 19:04:29.535431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.354 qpair failed and we were unable to recover it. 00:27:07.354 [2024-11-20 19:04:29.535645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-11-20 19:04:29.535689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.354 qpair failed and we were unable to recover it. 00:27:07.354 [2024-11-20 19:04:29.535902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-11-20 19:04:29.535936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.354 qpair failed and we were unable to recover it. 00:27:07.354 [2024-11-20 19:04:29.536128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-11-20 19:04:29.536162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.354 qpair failed and we were unable to recover it. 
00:27:07.354 [2024-11-20 19:04:29.536423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-11-20 19:04:29.536461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.354 qpair failed and we were unable to recover it. 00:27:07.354 [2024-11-20 19:04:29.536656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-11-20 19:04:29.536692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.354 qpair failed and we were unable to recover it. 00:27:07.354 [2024-11-20 19:04:29.536872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-11-20 19:04:29.536906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.354 qpair failed and we were unable to recover it. 00:27:07.354 [2024-11-20 19:04:29.537084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-11-20 19:04:29.537118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.354 qpair failed and we were unable to recover it. 00:27:07.354 [2024-11-20 19:04:29.537310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-11-20 19:04:29.537346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.354 qpair failed and we were unable to recover it. 
00:27:07.354 [2024-11-20 19:04:29.537519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-11-20 19:04:29.537552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.354 qpair failed and we were unable to recover it. 00:27:07.354 [2024-11-20 19:04:29.537702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-11-20 19:04:29.537736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.354 qpair failed and we were unable to recover it. 00:27:07.354 [2024-11-20 19:04:29.537980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-11-20 19:04:29.538015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.354 qpair failed and we were unable to recover it. 00:27:07.354 [2024-11-20 19:04:29.538139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-11-20 19:04:29.538172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.354 qpair failed and we were unable to recover it. 00:27:07.354 [2024-11-20 19:04:29.538452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-11-20 19:04:29.538501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.354 qpair failed and we were unable to recover it. 
00:27:07.354 [2024-11-20 19:04:29.538702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-11-20 19:04:29.538736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.354 qpair failed and we were unable to recover it. 00:27:07.354 [2024-11-20 19:04:29.538945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-11-20 19:04:29.538979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.354 qpair failed and we were unable to recover it. 00:27:07.354 [2024-11-20 19:04:29.539222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-11-20 19:04:29.539258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.354 qpair failed and we were unable to recover it. 00:27:07.354 [2024-11-20 19:04:29.539521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-11-20 19:04:29.539555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.354 qpair failed and we were unable to recover it. 00:27:07.354 [2024-11-20 19:04:29.539674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-11-20 19:04:29.539709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.354 qpair failed and we were unable to recover it. 
00:27:07.354 [2024-11-20 19:04:29.539851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-11-20 19:04:29.539885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.354 qpair failed and we were unable to recover it. 00:27:07.354 [2024-11-20 19:04:29.540011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.540045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 00:27:07.355 [2024-11-20 19:04:29.540171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.540215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 00:27:07.355 [2024-11-20 19:04:29.540406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.540440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 00:27:07.355 [2024-11-20 19:04:29.540631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.540665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 
00:27:07.355 [2024-11-20 19:04:29.540827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.540861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 00:27:07.355 [2024-11-20 19:04:29.541037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.541071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 00:27:07.355 [2024-11-20 19:04:29.541265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.541301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 00:27:07.355 [2024-11-20 19:04:29.541472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.541505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 00:27:07.355 [2024-11-20 19:04:29.541689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.541723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 
00:27:07.355 [2024-11-20 19:04:29.541929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.541964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 00:27:07.355 [2024-11-20 19:04:29.542195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.542238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 00:27:07.355 [2024-11-20 19:04:29.542356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.542390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 00:27:07.355 [2024-11-20 19:04:29.542512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.542547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 00:27:07.355 [2024-11-20 19:04:29.542847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.542882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 
00:27:07.355 [2024-11-20 19:04:29.543080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.543117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 00:27:07.355 [2024-11-20 19:04:29.543254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.543293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 00:27:07.355 [2024-11-20 19:04:29.543541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.543577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 00:27:07.355 [2024-11-20 19:04:29.543758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.543794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 00:27:07.355 [2024-11-20 19:04:29.544039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.544075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 
00:27:07.355 [2024-11-20 19:04:29.544212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.544247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 00:27:07.355 [2024-11-20 19:04:29.544381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.544415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 00:27:07.355 [2024-11-20 19:04:29.544669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.544710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 00:27:07.355 [2024-11-20 19:04:29.544909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.544944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 00:27:07.355 [2024-11-20 19:04:29.545131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.545167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 
00:27:07.355 [2024-11-20 19:04:29.545354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.545393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 00:27:07.355 [2024-11-20 19:04:29.545588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.545624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 00:27:07.355 [2024-11-20 19:04:29.545817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.545855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 00:27:07.355 [2024-11-20 19:04:29.546107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.546145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 00:27:07.355 [2024-11-20 19:04:29.546377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.546415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 
00:27:07.355 [2024-11-20 19:04:29.546604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.546638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 00:27:07.355 [2024-11-20 19:04:29.546823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.546858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 00:27:07.355 [2024-11-20 19:04:29.547039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.547076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 00:27:07.355 [2024-11-20 19:04:29.547371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.547427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 00:27:07.355 [2024-11-20 19:04:29.547661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.547715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 
00:27:07.355 [2024-11-20 19:04:29.547904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.547940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 00:27:07.355 [2024-11-20 19:04:29.548141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.548176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 00:27:07.355 [2024-11-20 19:04:29.548390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-11-20 19:04:29.548425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.355 qpair failed and we were unable to recover it. 00:27:07.355 [2024-11-20 19:04:29.548568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.356 [2024-11-20 19:04:29.548601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.356 qpair failed and we were unable to recover it. 00:27:07.356 [2024-11-20 19:04:29.548734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.356 [2024-11-20 19:04:29.548769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.356 qpair failed and we were unable to recover it. 
00:27:07.356 [2024-11-20 19:04:29.548899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.356 [2024-11-20 19:04:29.548933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.356 qpair failed and we were unable to recover it.
00:27:07.356 [2024-11-20 19:04:29.549053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.356 [2024-11-20 19:04:29.549086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.356 qpair failed and we were unable to recover it.
00:27:07.356 [2024-11-20 19:04:29.549260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.356 [2024-11-20 19:04:29.549295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.356 qpair failed and we were unable to recover it.
00:27:07.356 [2024-11-20 19:04:29.549475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.356 [2024-11-20 19:04:29.549509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.356 qpair failed and we were unable to recover it.
00:27:07.356 [2024-11-20 19:04:29.549749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.356 [2024-11-20 19:04:29.549783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.356 qpair failed and we were unable to recover it.
00:27:07.356 [2024-11-20 19:04:29.549990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.356 [2024-11-20 19:04:29.550024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.356 qpair failed and we were unable to recover it.
00:27:07.356 [2024-11-20 19:04:29.550224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.356 [2024-11-20 19:04:29.550260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.356 qpair failed and we were unable to recover it.
00:27:07.356 [2024-11-20 19:04:29.550453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.356 [2024-11-20 19:04:29.550487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.356 qpair failed and we were unable to recover it.
00:27:07.356 [2024-11-20 19:04:29.550603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.356 [2024-11-20 19:04:29.550636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.356 qpair failed and we were unable to recover it.
00:27:07.356 [2024-11-20 19:04:29.550783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.356 [2024-11-20 19:04:29.550816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.356 qpair failed and we were unable to recover it.
00:27:07.356 [2024-11-20 19:04:29.551080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.356 [2024-11-20 19:04:29.551113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.356 qpair failed and we were unable to recover it.
00:27:07.356 [2024-11-20 19:04:29.551250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.356 [2024-11-20 19:04:29.551286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.356 qpair failed and we were unable to recover it.
00:27:07.356 [2024-11-20 19:04:29.551495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.356 [2024-11-20 19:04:29.551529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.356 qpair failed and we were unable to recover it.
00:27:07.356 [2024-11-20 19:04:29.551650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.356 [2024-11-20 19:04:29.551684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.356 qpair failed and we were unable to recover it.
00:27:07.356 [2024-11-20 19:04:29.551882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.356 [2024-11-20 19:04:29.551915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.356 qpair failed and we were unable to recover it.
00:27:07.356 [2024-11-20 19:04:29.552122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.356 [2024-11-20 19:04:29.552155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.356 qpair failed and we were unable to recover it.
00:27:07.356 [2024-11-20 19:04:29.552354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.356 [2024-11-20 19:04:29.552389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.356 qpair failed and we were unable to recover it.
00:27:07.356 [2024-11-20 19:04:29.552499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.356 [2024-11-20 19:04:29.552533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.356 qpair failed and we were unable to recover it.
00:27:07.356 [2024-11-20 19:04:29.552707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.356 [2024-11-20 19:04:29.552741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.356 qpair failed and we were unable to recover it.
00:27:07.356 [2024-11-20 19:04:29.553008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.356 [2024-11-20 19:04:29.553043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.356 qpair failed and we were unable to recover it.
00:27:07.356 [2024-11-20 19:04:29.553287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.356 [2024-11-20 19:04:29.553323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.356 qpair failed and we were unable to recover it.
00:27:07.356 [2024-11-20 19:04:29.553533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.356 [2024-11-20 19:04:29.553565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.356 qpair failed and we were unable to recover it.
00:27:07.356 [2024-11-20 19:04:29.553691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.356 [2024-11-20 19:04:29.553732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.356 qpair failed and we were unable to recover it.
00:27:07.356 [2024-11-20 19:04:29.553915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.356 [2024-11-20 19:04:29.553948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.356 qpair failed and we were unable to recover it.
00:27:07.356 [2024-11-20 19:04:29.554144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.356 [2024-11-20 19:04:29.554177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.356 qpair failed and we were unable to recover it.
00:27:07.356 [2024-11-20 19:04:29.554387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.356 [2024-11-20 19:04:29.554422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.356 qpair failed and we were unable to recover it.
00:27:07.356 [2024-11-20 19:04:29.554664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.356 [2024-11-20 19:04:29.554697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.356 qpair failed and we were unable to recover it.
00:27:07.356 [2024-11-20 19:04:29.555023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.356 [2024-11-20 19:04:29.555056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.356 qpair failed and we were unable to recover it.
00:27:07.356 [2024-11-20 19:04:29.555240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.356 [2024-11-20 19:04:29.555275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.356 qpair failed and we were unable to recover it.
00:27:07.356 [2024-11-20 19:04:29.555383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.356 [2024-11-20 19:04:29.555417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.356 qpair failed and we were unable to recover it.
00:27:07.356 [2024-11-20 19:04:29.555588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.356 [2024-11-20 19:04:29.555621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.356 qpair failed and we were unable to recover it.
00:27:07.356 [2024-11-20 19:04:29.555859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.356 [2024-11-20 19:04:29.555892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.356 qpair failed and we were unable to recover it.
00:27:07.356 [2024-11-20 19:04:29.556064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.356 [2024-11-20 19:04:29.556098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.356 qpair failed and we were unable to recover it.
00:27:07.356 [2024-11-20 19:04:29.556337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.356 [2024-11-20 19:04:29.556373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.356 qpair failed and we were unable to recover it.
00:27:07.356 [2024-11-20 19:04:29.556551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.356 [2024-11-20 19:04:29.556584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.356 qpair failed and we were unable to recover it.
00:27:07.357 [2024-11-20 19:04:29.556819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.357 [2024-11-20 19:04:29.556853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.357 qpair failed and we were unable to recover it.
00:27:07.357 [2024-11-20 19:04:29.557106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.357 [2024-11-20 19:04:29.557140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.357 qpair failed and we were unable to recover it.
00:27:07.357 [2024-11-20 19:04:29.557331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.357 [2024-11-20 19:04:29.557366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.357 qpair failed and we were unable to recover it.
00:27:07.357 [2024-11-20 19:04:29.557611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.357 [2024-11-20 19:04:29.557644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.357 qpair failed and we were unable to recover it.
00:27:07.357 [2024-11-20 19:04:29.557912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.357 [2024-11-20 19:04:29.557945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.357 qpair failed and we were unable to recover it.
00:27:07.357 [2024-11-20 19:04:29.558118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.357 [2024-11-20 19:04:29.558152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.357 qpair failed and we were unable to recover it.
00:27:07.357 [2024-11-20 19:04:29.558277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.357 [2024-11-20 19:04:29.558312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.357 qpair failed and we were unable to recover it.
00:27:07.357 [2024-11-20 19:04:29.558529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.357 [2024-11-20 19:04:29.558562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.357 qpair failed and we were unable to recover it.
00:27:07.357 [2024-11-20 19:04:29.558851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.357 [2024-11-20 19:04:29.558884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.357 qpair failed and we were unable to recover it.
00:27:07.357 [2024-11-20 19:04:29.559013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.357 [2024-11-20 19:04:29.559046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.357 qpair failed and we were unable to recover it.
00:27:07.357 [2024-11-20 19:04:29.559238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.357 [2024-11-20 19:04:29.559273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.357 qpair failed and we were unable to recover it.
00:27:07.357 [2024-11-20 19:04:29.559448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.357 [2024-11-20 19:04:29.559482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.357 qpair failed and we were unable to recover it.
00:27:07.357 [2024-11-20 19:04:29.559650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.357 [2024-11-20 19:04:29.559684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.357 qpair failed and we were unable to recover it.
00:27:07.357 [2024-11-20 19:04:29.559930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.357 [2024-11-20 19:04:29.559964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.357 qpair failed and we were unable to recover it.
00:27:07.357 [2024-11-20 19:04:29.560155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.357 [2024-11-20 19:04:29.560189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.357 qpair failed and we were unable to recover it.
00:27:07.357 [2024-11-20 19:04:29.560346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.357 [2024-11-20 19:04:29.560380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.357 qpair failed and we were unable to recover it.
00:27:07.357 [2024-11-20 19:04:29.560629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.357 [2024-11-20 19:04:29.560663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.357 qpair failed and we were unable to recover it.
00:27:07.357 [2024-11-20 19:04:29.560856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.357 [2024-11-20 19:04:29.560889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.357 qpair failed and we were unable to recover it.
00:27:07.357 [2024-11-20 19:04:29.561018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.357 [2024-11-20 19:04:29.561053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.357 qpair failed and we were unable to recover it.
00:27:07.357 [2024-11-20 19:04:29.561160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.357 [2024-11-20 19:04:29.561194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.357 qpair failed and we were unable to recover it.
00:27:07.357 [2024-11-20 19:04:29.561351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.357 [2024-11-20 19:04:29.561385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.357 qpair failed and we were unable to recover it.
00:27:07.357 [2024-11-20 19:04:29.561574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.357 [2024-11-20 19:04:29.561608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.357 qpair failed and we were unable to recover it.
00:27:07.357 [2024-11-20 19:04:29.561816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.357 [2024-11-20 19:04:29.561851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.357 qpair failed and we were unable to recover it.
00:27:07.357 [2024-11-20 19:04:29.562027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.357 [2024-11-20 19:04:29.562062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.357 qpair failed and we were unable to recover it.
00:27:07.357 [2024-11-20 19:04:29.562169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.357 [2024-11-20 19:04:29.562213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.357 qpair failed and we were unable to recover it.
00:27:07.357 [2024-11-20 19:04:29.562488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.357 [2024-11-20 19:04:29.562524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.357 qpair failed and we were unable to recover it.
00:27:07.357 [2024-11-20 19:04:29.562637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.357 [2024-11-20 19:04:29.562670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.357 qpair failed and we were unable to recover it.
00:27:07.357 [2024-11-20 19:04:29.562844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.357 [2024-11-20 19:04:29.562885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.357 qpair failed and we were unable to recover it.
00:27:07.357 [2024-11-20 19:04:29.563125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.357 [2024-11-20 19:04:29.563159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.357 qpair failed and we were unable to recover it.
00:27:07.357 [2024-11-20 19:04:29.563416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.357 [2024-11-20 19:04:29.563452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.357 qpair failed and we were unable to recover it.
00:27:07.357 [2024-11-20 19:04:29.563577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.357 [2024-11-20 19:04:29.563612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.357 qpair failed and we were unable to recover it.
00:27:07.357 [2024-11-20 19:04:29.563857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.357 [2024-11-20 19:04:29.563891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.357 qpair failed and we were unable to recover it.
00:27:07.357 [2024-11-20 19:04:29.564030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.357 [2024-11-20 19:04:29.564065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.357 qpair failed and we were unable to recover it.
00:27:07.357 [2024-11-20 19:04:29.564253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.357 [2024-11-20 19:04:29.564288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.357 qpair failed and we were unable to recover it.
00:27:07.357 [2024-11-20 19:04:29.564475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.358 [2024-11-20 19:04:29.564510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.358 qpair failed and we were unable to recover it.
00:27:07.358 [2024-11-20 19:04:29.564745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.358 [2024-11-20 19:04:29.564781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.358 qpair failed and we were unable to recover it.
00:27:07.358 [2024-11-20 19:04:29.564982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.358 [2024-11-20 19:04:29.565016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.358 qpair failed and we were unable to recover it.
00:27:07.358 [2024-11-20 19:04:29.565284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.358 [2024-11-20 19:04:29.565320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.358 qpair failed and we were unable to recover it.
00:27:07.358 [2024-11-20 19:04:29.565520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.358 [2024-11-20 19:04:29.565555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.358 qpair failed and we were unable to recover it.
00:27:07.358 [2024-11-20 19:04:29.565750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.358 [2024-11-20 19:04:29.565785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.358 qpair failed and we were unable to recover it.
00:27:07.358 [2024-11-20 19:04:29.565910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.358 [2024-11-20 19:04:29.565945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.358 qpair failed and we were unable to recover it.
00:27:07.358 [2024-11-20 19:04:29.566178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.358 [2024-11-20 19:04:29.566222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.358 qpair failed and we were unable to recover it.
00:27:07.358 [2024-11-20 19:04:29.566353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.358 [2024-11-20 19:04:29.566387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.358 qpair failed and we were unable to recover it.
00:27:07.358 [2024-11-20 19:04:29.566574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.358 [2024-11-20 19:04:29.566609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.358 qpair failed and we were unable to recover it.
00:27:07.358 [2024-11-20 19:04:29.566796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.358 [2024-11-20 19:04:29.566831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.358 qpair failed and we were unable to recover it.
00:27:07.358 [2024-11-20 19:04:29.566954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.358 [2024-11-20 19:04:29.566989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.358 qpair failed and we were unable to recover it.
00:27:07.358 [2024-11-20 19:04:29.567180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.358 [2024-11-20 19:04:29.567223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.358 qpair failed and we were unable to recover it.
00:27:07.358 [2024-11-20 19:04:29.567421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.358 [2024-11-20 19:04:29.567456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.358 qpair failed and we were unable to recover it.
00:27:07.358 [2024-11-20 19:04:29.567700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.358 [2024-11-20 19:04:29.567738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.358 qpair failed and we were unable to recover it.
00:27:07.358 [2024-11-20 19:04:29.567870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.358 [2024-11-20 19:04:29.567904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.358 qpair failed and we were unable to recover it.
00:27:07.358 [2024-11-20 19:04:29.568081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.358 [2024-11-20 19:04:29.568117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.358 qpair failed and we were unable to recover it.
00:27:07.358 [2024-11-20 19:04:29.568308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.358 [2024-11-20 19:04:29.568345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.358 qpair failed and we were unable to recover it.
00:27:07.358 [2024-11-20 19:04:29.568465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.358 [2024-11-20 19:04:29.568501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.358 qpair failed and we were unable to recover it.
00:27:07.358 [2024-11-20 19:04:29.568687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.358 [2024-11-20 19:04:29.568723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.358 qpair failed and we were unable to recover it.
00:27:07.358 [2024-11-20 19:04:29.568846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.358 [2024-11-20 19:04:29.568881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.358 qpair failed and we were unable to recover it.
00:27:07.358 [2024-11-20 19:04:29.569055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.358 [2024-11-20 19:04:29.569091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.358 qpair failed and we were unable to recover it.
00:27:07.358 [2024-11-20 19:04:29.569198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.358 [2024-11-20 19:04:29.569244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.358 qpair failed and we were unable to recover it.
00:27:07.358 [2024-11-20 19:04:29.569376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.358 [2024-11-20 19:04:29.569410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.358 qpair failed and we were unable to recover it.
00:27:07.358 [2024-11-20 19:04:29.569530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.358 [2024-11-20 19:04:29.569564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.358 qpair failed and we were unable to recover it.
00:27:07.358 [2024-11-20 19:04:29.569670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.358 [2024-11-20 19:04:29.569705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.358 qpair failed and we were unable to recover it.
00:27:07.358 [2024-11-20 19:04:29.569958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-11-20 19:04:29.569992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 00:27:07.358 [2024-11-20 19:04:29.570105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-11-20 19:04:29.570139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 00:27:07.358 [2024-11-20 19:04:29.570307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-11-20 19:04:29.570342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 00:27:07.358 [2024-11-20 19:04:29.570583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-11-20 19:04:29.570618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 00:27:07.358 [2024-11-20 19:04:29.570761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-11-20 19:04:29.570796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 
00:27:07.358 [2024-11-20 19:04:29.571039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-11-20 19:04:29.571075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 00:27:07.358 [2024-11-20 19:04:29.571273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-11-20 19:04:29.571308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 00:27:07.358 [2024-11-20 19:04:29.571429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-11-20 19:04:29.571472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 00:27:07.358 [2024-11-20 19:04:29.571747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-11-20 19:04:29.571782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-11-20 19:04:29.571994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-11-20 19:04:29.572028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 
00:27:07.359 [2024-11-20 19:04:29.572239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-11-20 19:04:29.572275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-11-20 19:04:29.572408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-11-20 19:04:29.572443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-11-20 19:04:29.572619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-11-20 19:04:29.572654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-11-20 19:04:29.572840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-11-20 19:04:29.572874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-11-20 19:04:29.573060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-11-20 19:04:29.573094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 
00:27:07.359 [2024-11-20 19:04:29.573280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-11-20 19:04:29.573317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-11-20 19:04:29.573530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-11-20 19:04:29.573564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-11-20 19:04:29.573677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-11-20 19:04:29.573711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-11-20 19:04:29.573843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-11-20 19:04:29.573878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-11-20 19:04:29.574126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-11-20 19:04:29.574160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 
00:27:07.359 [2024-11-20 19:04:29.574435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-11-20 19:04:29.574471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-11-20 19:04:29.574654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-11-20 19:04:29.574689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-11-20 19:04:29.574871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-11-20 19:04:29.574904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-11-20 19:04:29.575095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-11-20 19:04:29.575129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-11-20 19:04:29.575392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-11-20 19:04:29.575429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 
00:27:07.359 [2024-11-20 19:04:29.575553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-11-20 19:04:29.575586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-11-20 19:04:29.575831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-11-20 19:04:29.575866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-11-20 19:04:29.575994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-11-20 19:04:29.576026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-11-20 19:04:29.576147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-11-20 19:04:29.576181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-11-20 19:04:29.576366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-11-20 19:04:29.576400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 
00:27:07.359 [2024-11-20 19:04:29.576583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-11-20 19:04:29.576615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-11-20 19:04:29.576792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-11-20 19:04:29.576824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-11-20 19:04:29.577017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-11-20 19:04:29.577050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-11-20 19:04:29.577186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-11-20 19:04:29.577231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-11-20 19:04:29.577554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-11-20 19:04:29.577610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 
00:27:07.359 [2024-11-20 19:04:29.577898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-11-20 19:04:29.577952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-11-20 19:04:29.578091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-11-20 19:04:29.578127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-11-20 19:04:29.578320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-11-20 19:04:29.578358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-11-20 19:04:29.578622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-11-20 19:04:29.578656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-11-20 19:04:29.578773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-11-20 19:04:29.578807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 
00:27:07.359 [2024-11-20 19:04:29.578917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-11-20 19:04:29.578951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-11-20 19:04:29.579135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-11-20 19:04:29.579169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.360 qpair failed and we were unable to recover it. 00:27:07.360 [2024-11-20 19:04:29.579328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.360 [2024-11-20 19:04:29.579372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.360 qpair failed and we were unable to recover it. 00:27:07.360 [2024-11-20 19:04:29.579641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.360 [2024-11-20 19:04:29.579674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.360 qpair failed and we were unable to recover it. 00:27:07.360 [2024-11-20 19:04:29.579851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.360 [2024-11-20 19:04:29.579885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.360 qpair failed and we were unable to recover it. 
00:27:07.360 [2024-11-20 19:04:29.580066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.360 [2024-11-20 19:04:29.580099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.360 qpair failed and we were unable to recover it. 00:27:07.360 [2024-11-20 19:04:29.580303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.360 [2024-11-20 19:04:29.580339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.360 qpair failed and we were unable to recover it. 00:27:07.360 [2024-11-20 19:04:29.580604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.360 [2024-11-20 19:04:29.580645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.360 qpair failed and we were unable to recover it. 00:27:07.360 [2024-11-20 19:04:29.580823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.360 [2024-11-20 19:04:29.580857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.360 qpair failed and we were unable to recover it. 00:27:07.360 [2024-11-20 19:04:29.581126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.360 [2024-11-20 19:04:29.581159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.360 qpair failed and we were unable to recover it. 
00:27:07.360 [2024-11-20 19:04:29.581304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.360 [2024-11-20 19:04:29.581338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.360 qpair failed and we were unable to recover it. 00:27:07.360 [2024-11-20 19:04:29.581480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.360 [2024-11-20 19:04:29.581513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.360 qpair failed and we were unable to recover it. 00:27:07.360 [2024-11-20 19:04:29.581692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.360 [2024-11-20 19:04:29.581725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.360 qpair failed and we were unable to recover it. 00:27:07.360 [2024-11-20 19:04:29.581918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.360 [2024-11-20 19:04:29.581950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.360 qpair failed and we were unable to recover it. 00:27:07.360 [2024-11-20 19:04:29.582189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.360 [2024-11-20 19:04:29.582232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.360 qpair failed and we were unable to recover it. 
00:27:07.360 [2024-11-20 19:04:29.582443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.360 [2024-11-20 19:04:29.582476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.360 qpair failed and we were unable to recover it. 00:27:07.360 [2024-11-20 19:04:29.582659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.360 [2024-11-20 19:04:29.582692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.360 qpair failed and we were unable to recover it. 00:27:07.360 [2024-11-20 19:04:29.582822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.360 [2024-11-20 19:04:29.582855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.360 qpair failed and we were unable to recover it. 00:27:07.360 [2024-11-20 19:04:29.583028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.360 [2024-11-20 19:04:29.583062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.360 qpair failed and we were unable to recover it. 00:27:07.360 [2024-11-20 19:04:29.583254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.360 [2024-11-20 19:04:29.583288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.360 qpair failed and we were unable to recover it. 
00:27:07.360 [2024-11-20 19:04:29.583467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.360 [2024-11-20 19:04:29.583500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.360 qpair failed and we were unable to recover it. 00:27:07.360 [2024-11-20 19:04:29.583697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.360 [2024-11-20 19:04:29.583731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.360 qpair failed and we were unable to recover it. 00:27:07.360 [2024-11-20 19:04:29.583926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.360 [2024-11-20 19:04:29.583959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.360 qpair failed and we were unable to recover it. 00:27:07.360 [2024-11-20 19:04:29.584235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.360 [2024-11-20 19:04:29.584270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.360 qpair failed and we were unable to recover it. 00:27:07.360 [2024-11-20 19:04:29.584517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.360 [2024-11-20 19:04:29.584550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.360 qpair failed and we were unable to recover it. 
00:27:07.360 [2024-11-20 19:04:29.584840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.360 [2024-11-20 19:04:29.584873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.360 qpair failed and we were unable to recover it. 00:27:07.360 [2024-11-20 19:04:29.585055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.360 [2024-11-20 19:04:29.585089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.360 qpair failed and we were unable to recover it. 00:27:07.360 [2024-11-20 19:04:29.585273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.360 [2024-11-20 19:04:29.585309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.360 qpair failed and we were unable to recover it. 00:27:07.360 [2024-11-20 19:04:29.585510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.360 [2024-11-20 19:04:29.585542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.360 qpair failed and we were unable to recover it. 00:27:07.360 [2024-11-20 19:04:29.585807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.360 [2024-11-20 19:04:29.585840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.360 qpair failed and we were unable to recover it. 
00:27:07.360 [2024-11-20 19:04:29.586088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.360 [2024-11-20 19:04:29.586121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.360 qpair failed and we were unable to recover it. 00:27:07.360 [2024-11-20 19:04:29.586259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.360 [2024-11-20 19:04:29.586294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.360 qpair failed and we were unable to recover it. 00:27:07.360 [2024-11-20 19:04:29.586494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.360 [2024-11-20 19:04:29.586527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.360 qpair failed and we were unable to recover it. 00:27:07.360 [2024-11-20 19:04:29.586807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.360 [2024-11-20 19:04:29.586841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.360 qpair failed and we were unable to recover it. 00:27:07.360 [2024-11-20 19:04:29.587030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.360 [2024-11-20 19:04:29.587075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.360 qpair failed and we were unable to recover it. 
00:27:07.361 [2024-11-20 19:04:29.587214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.361 [2024-11-20 19:04:29.587249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.361 qpair failed and we were unable to recover it. 00:27:07.361 [2024-11-20 19:04:29.587490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.361 [2024-11-20 19:04:29.587522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.361 qpair failed and we were unable to recover it. 00:27:07.361 [2024-11-20 19:04:29.587772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.361 [2024-11-20 19:04:29.587806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.361 qpair failed and we were unable to recover it. 00:27:07.361 [2024-11-20 19:04:29.588003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.361 [2024-11-20 19:04:29.588036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.361 qpair failed and we were unable to recover it. 00:27:07.361 [2024-11-20 19:04:29.588181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.361 [2024-11-20 19:04:29.588227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.361 qpair failed and we were unable to recover it. 
00:27:07.361 [2024-11-20 19:04:29.588478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.361 [2024-11-20 19:04:29.588511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.361 qpair failed and we were unable to recover it. 00:27:07.361 [2024-11-20 19:04:29.588692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.361 [2024-11-20 19:04:29.588726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.361 qpair failed and we were unable to recover it. 00:27:07.361 [2024-11-20 19:04:29.588911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.361 [2024-11-20 19:04:29.588945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.361 qpair failed and we were unable to recover it. 00:27:07.361 [2024-11-20 19:04:29.589224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.361 [2024-11-20 19:04:29.589259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.361 qpair failed and we were unable to recover it. 00:27:07.361 [2024-11-20 19:04:29.589393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.361 [2024-11-20 19:04:29.589427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.361 qpair failed and we were unable to recover it. 
00:27:07.361 [2024-11-20 19:04:29.589533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.361 [2024-11-20 19:04:29.589567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.361 qpair failed and we were unable to recover it. 
[log truncated: the three-message pattern above (posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock connection error for tqpair=0x7f7418000b90 against 10.0.0.2 port 4420, qpair failed and unrecoverable) repeats with advancing timestamps from 19:04:29.589533 through 19:04:29.615113]
00:27:07.364 [2024-11-20 19:04:29.615295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-11-20 19:04:29.615330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 00:27:07.364 [2024-11-20 19:04:29.615577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-11-20 19:04:29.615609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 00:27:07.364 [2024-11-20 19:04:29.615794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-11-20 19:04:29.615828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 00:27:07.364 [2024-11-20 19:04:29.616009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-11-20 19:04:29.616043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 00:27:07.364 [2024-11-20 19:04:29.616173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-11-20 19:04:29.616232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 
00:27:07.364 [2024-11-20 19:04:29.616375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-11-20 19:04:29.616409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 00:27:07.364 [2024-11-20 19:04:29.616601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-11-20 19:04:29.616634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 00:27:07.364 [2024-11-20 19:04:29.616750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-11-20 19:04:29.616784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 00:27:07.364 [2024-11-20 19:04:29.616973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-11-20 19:04:29.617007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 00:27:07.364 [2024-11-20 19:04:29.617257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-11-20 19:04:29.617292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 
00:27:07.364 [2024-11-20 19:04:29.617503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-11-20 19:04:29.617537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 00:27:07.364 [2024-11-20 19:04:29.617722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-11-20 19:04:29.617755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 00:27:07.364 [2024-11-20 19:04:29.618012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-11-20 19:04:29.618045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 00:27:07.365 [2024-11-20 19:04:29.618242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-11-20 19:04:29.618277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-11-20 19:04:29.618469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-11-20 19:04:29.618501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 
00:27:07.365 [2024-11-20 19:04:29.618747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-11-20 19:04:29.618781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-11-20 19:04:29.619045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-11-20 19:04:29.619079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-11-20 19:04:29.619272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-11-20 19:04:29.619307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-11-20 19:04:29.619415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-11-20 19:04:29.619449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-11-20 19:04:29.619623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-11-20 19:04:29.619657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 
00:27:07.365 [2024-11-20 19:04:29.619930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-11-20 19:04:29.619964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-11-20 19:04:29.620165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-11-20 19:04:29.620199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-11-20 19:04:29.620328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-11-20 19:04:29.620362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-11-20 19:04:29.620474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-11-20 19:04:29.620506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-11-20 19:04:29.620683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-11-20 19:04:29.620717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 
00:27:07.365 [2024-11-20 19:04:29.620897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-11-20 19:04:29.620931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b9 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:07.365 0 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-11-20 19:04:29.621180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-11-20 19:04:29.621219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:07.365 [2024-11-20 19:04:29.621416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-11-20 19:04:29.621450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-11-20 19:04:29.621591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-11-20 19:04:29.621625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.365 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:07.365 qpair failed and we were unable to recover it. 
00:27:07.365 [2024-11-20 19:04:29.621834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-11-20 19:04:29.621866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:07.365 [2024-11-20 19:04:29.622064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-11-20 19:04:29.622100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:07.365 [2024-11-20 19:04:29.622295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-11-20 19:04:29.622329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-11-20 19:04:29.622507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-11-20 19:04:29.622547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 
00:27:07.365 [2024-11-20 19:04:29.622732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-11-20 19:04:29.622772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-11-20 19:04:29.622898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-11-20 19:04:29.622929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-11-20 19:04:29.623033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-11-20 19:04:29.623063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-11-20 19:04:29.623303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-11-20 19:04:29.623338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-11-20 19:04:29.623509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-11-20 19:04:29.623541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 
00:27:07.365 [2024-11-20 19:04:29.623682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-11-20 19:04:29.623715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-11-20 19:04:29.623922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-11-20 19:04:29.623953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-11-20 19:04:29.624170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-11-20 19:04:29.624210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-11-20 19:04:29.624333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-11-20 19:04:29.624365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-11-20 19:04:29.624552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-11-20 19:04:29.624584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 
00:27:07.365 [2024-11-20 19:04:29.624698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-11-20 19:04:29.624730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-11-20 19:04:29.624935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-11-20 19:04:29.624967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-11-20 19:04:29.625088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-11-20 19:04:29.625123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-11-20 19:04:29.625306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-11-20 19:04:29.625340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-11-20 19:04:29.625546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-11-20 19:04:29.625577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 
00:27:07.366 [2024-11-20 19:04:29.625780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-11-20 19:04:29.625814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 00:27:07.366 [2024-11-20 19:04:29.626009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-11-20 19:04:29.626041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 00:27:07.366 [2024-11-20 19:04:29.626160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-11-20 19:04:29.626194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 00:27:07.366 [2024-11-20 19:04:29.626394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-11-20 19:04:29.626427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 00:27:07.366 [2024-11-20 19:04:29.626619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-11-20 19:04:29.626652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 
00:27:07.366 [2024-11-20 19:04:29.626842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-11-20 19:04:29.626873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 00:27:07.366 [2024-11-20 19:04:29.627055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-11-20 19:04:29.627088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 00:27:07.366 [2024-11-20 19:04:29.627266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-11-20 19:04:29.627300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 00:27:07.366 [2024-11-20 19:04:29.627475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-11-20 19:04:29.627506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 00:27:07.366 [2024-11-20 19:04:29.627613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-11-20 19:04:29.627646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 
00:27:07.366 [2024-11-20 19:04:29.627848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-11-20 19:04:29.627881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 00:27:07.366 [2024-11-20 19:04:29.628066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-11-20 19:04:29.628097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 00:27:07.366 [2024-11-20 19:04:29.628268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-11-20 19:04:29.628302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 00:27:07.366 [2024-11-20 19:04:29.628505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-11-20 19:04:29.628537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 00:27:07.366 [2024-11-20 19:04:29.628779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-11-20 19:04:29.628811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 
00:27:07.366 [2024-11-20 19:04:29.628918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-11-20 19:04:29.628953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 00:27:07.366 [2024-11-20 19:04:29.629140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-11-20 19:04:29.629172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 00:27:07.366 [2024-11-20 19:04:29.629360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-11-20 19:04:29.629392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 00:27:07.366 [2024-11-20 19:04:29.629654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-11-20 19:04:29.629688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 00:27:07.366 [2024-11-20 19:04:29.629815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-11-20 19:04:29.629847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 
00:27:07.366 [2024-11-20 19:04:29.630060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-11-20 19:04:29.630092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 00:27:07.366 [2024-11-20 19:04:29.630265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-11-20 19:04:29.630298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 00:27:07.366 [2024-11-20 19:04:29.630424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-11-20 19:04:29.630456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 00:27:07.366 [2024-11-20 19:04:29.630701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-11-20 19:04:29.630733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 00:27:07.366 [2024-11-20 19:04:29.630850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-11-20 19:04:29.630893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 
00:27:07.366 [2024-11-20 19:04:29.631140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.366 [2024-11-20 19:04:29.631171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.366 qpair failed and we were unable to recover it.
00:27:07.366 [2024-11-20 19:04:29.631367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.366 [2024-11-20 19:04:29.631404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.366 qpair failed and we were unable to recover it.
[... the same three-line failure record — connect() failed, errno = 111; sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. — repeats for every retry from 19:04:29.631586 through 19:04:29.652610 ...]
00:27:07.637 [2024-11-20 19:04:29.652725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.637 [2024-11-20 19:04:29.652758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.637 qpair failed and we were unable to recover it. 00:27:07.637 [2024-11-20 19:04:29.652939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.637 [2024-11-20 19:04:29.652971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.637 qpair failed and we were unable to recover it. 00:27:07.637 [2024-11-20 19:04:29.653080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.637 [2024-11-20 19:04:29.653113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.637 qpair failed and we were unable to recover it. 00:27:07.637 [2024-11-20 19:04:29.653230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.637 [2024-11-20 19:04:29.653264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.637 qpair failed and we were unable to recover it. 00:27:07.637 [2024-11-20 19:04:29.653438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.637 [2024-11-20 19:04:29.653477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.637 qpair failed and we were unable to recover it. 
00:27:07.637 [2024-11-20 19:04:29.653601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.637 [2024-11-20 19:04:29.653635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.637 qpair failed and we were unable to recover it. 00:27:07.637 [2024-11-20 19:04:29.653746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.637 [2024-11-20 19:04:29.653782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.637 qpair failed and we were unable to recover it. 00:27:07.637 [2024-11-20 19:04:29.653912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.637 [2024-11-20 19:04:29.653945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.637 qpair failed and we were unable to recover it. 00:27:07.637 [2024-11-20 19:04:29.654068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.637 [2024-11-20 19:04:29.654101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.637 qpair failed and we were unable to recover it. 00:27:07.637 [2024-11-20 19:04:29.654357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.637 [2024-11-20 19:04:29.654392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.637 qpair failed and we were unable to recover it. 
00:27:07.637 [2024-11-20 19:04:29.654578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.637 [2024-11-20 19:04:29.654609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.637 qpair failed and we were unable to recover it. 00:27:07.637 [2024-11-20 19:04:29.654716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.637 [2024-11-20 19:04:29.654747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.637 qpair failed and we were unable to recover it. 00:27:07.637 [2024-11-20 19:04:29.654868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.637 [2024-11-20 19:04:29.654902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.637 qpair failed and we were unable to recover it. 00:27:07.637 [2024-11-20 19:04:29.655016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.637 [2024-11-20 19:04:29.655047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.637 qpair failed and we were unable to recover it. 00:27:07.638 [2024-11-20 19:04:29.655160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-11-20 19:04:29.655190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 
00:27:07.638 [2024-11-20 19:04:29.655388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-11-20 19:04:29.655419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-11-20 19:04:29.655535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-11-20 19:04:29.655566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-11-20 19:04:29.655755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-11-20 19:04:29.655785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-11-20 19:04:29.655894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-11-20 19:04:29.655925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-11-20 19:04:29.656167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-11-20 19:04:29.656198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 
00:27:07.638 [2024-11-20 19:04:29.656308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-11-20 19:04:29.656339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-11-20 19:04:29.656452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-11-20 19:04:29.656483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-11-20 19:04:29.656648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-11-20 19:04:29.656682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-11-20 19:04:29.656916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-11-20 19:04:29.656946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-11-20 19:04:29.657053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-11-20 19:04:29.657084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 
00:27:07.638 [2024-11-20 19:04:29.657319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-11-20 19:04:29.657350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-11-20 19:04:29.657560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-11-20 19:04:29.657590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-11-20 19:04:29.657716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-11-20 19:04:29.657746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-11-20 19:04:29.657921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-11-20 19:04:29.657951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 00:27:07.638 [2024-11-20 19:04:29.658122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.638 [2024-11-20 19:04:29.658153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.638 qpair failed and we were unable to recover it. 
00:27:07.638 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 
00:27:07.638 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:27:07.638 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:07.638 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 
00:27:07.639 [2024-11-20 19:04:29.664949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.639 [2024-11-20 19:04:29.664989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.639 qpair failed and we were unable to recover it. 
00:27:07.640 [2024-11-20 19:04:29.669431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-11-20 19:04:29.669464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-11-20 19:04:29.669571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-11-20 19:04:29.669605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-11-20 19:04:29.669711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-11-20 19:04:29.669743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-11-20 19:04:29.669999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-11-20 19:04:29.670035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-11-20 19:04:29.670144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-11-20 19:04:29.670175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 
00:27:07.640 [2024-11-20 19:04:29.670317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-11-20 19:04:29.670366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-11-20 19:04:29.670505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.640 [2024-11-20 19:04:29.670541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.640 qpair failed and we were unable to recover it. 00:27:07.640 [2024-11-20 19:04:29.670746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.641 [2024-11-20 19:04:29.670780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.641 qpair failed and we were unable to recover it. 00:27:07.641 [2024-11-20 19:04:29.671021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.641 [2024-11-20 19:04:29.671055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.641 qpair failed and we were unable to recover it. 00:27:07.641 [2024-11-20 19:04:29.671260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.641 [2024-11-20 19:04:29.671297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.641 qpair failed and we were unable to recover it. 
00:27:07.641 [2024-11-20 19:04:29.671559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.641 [2024-11-20 19:04:29.671593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.641 qpair failed and we were unable to recover it. 00:27:07.641 [2024-11-20 19:04:29.671771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.641 [2024-11-20 19:04:29.671805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.641 qpair failed and we were unable to recover it. 00:27:07.641 [2024-11-20 19:04:29.671929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.641 [2024-11-20 19:04:29.671964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.641 qpair failed and we were unable to recover it. 00:27:07.641 [2024-11-20 19:04:29.672081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.641 [2024-11-20 19:04:29.672115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.641 qpair failed and we were unable to recover it. 00:27:07.641 [2024-11-20 19:04:29.672360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.641 [2024-11-20 19:04:29.672397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.641 qpair failed and we were unable to recover it. 
00:27:07.641 [2024-11-20 19:04:29.672515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.641 [2024-11-20 19:04:29.672548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.641 qpair failed and we were unable to recover it. 00:27:07.641 [2024-11-20 19:04:29.672667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.641 [2024-11-20 19:04:29.672709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.641 qpair failed and we were unable to recover it. 00:27:07.641 [2024-11-20 19:04:29.672922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.641 [2024-11-20 19:04:29.672957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.641 qpair failed and we were unable to recover it. 00:27:07.641 [2024-11-20 19:04:29.673069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.641 [2024-11-20 19:04:29.673103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.641 qpair failed and we were unable to recover it. 00:27:07.641 [2024-11-20 19:04:29.673282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.641 [2024-11-20 19:04:29.673319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.641 qpair failed and we were unable to recover it. 
00:27:07.641 [2024-11-20 19:04:29.673464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.641 [2024-11-20 19:04:29.673499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.641 qpair failed and we were unable to recover it. 00:27:07.641 [2024-11-20 19:04:29.673685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.641 [2024-11-20 19:04:29.673718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.641 qpair failed and we were unable to recover it. 00:27:07.641 [2024-11-20 19:04:29.673903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.641 [2024-11-20 19:04:29.673937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.641 qpair failed and we were unable to recover it. 00:27:07.641 [2024-11-20 19:04:29.674085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.641 [2024-11-20 19:04:29.674119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.641 qpair failed and we were unable to recover it. 00:27:07.641 [2024-11-20 19:04:29.674304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.641 [2024-11-20 19:04:29.674337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.641 qpair failed and we were unable to recover it. 
00:27:07.641 [2024-11-20 19:04:29.674530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.641 [2024-11-20 19:04:29.674559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.641 qpair failed and we were unable to recover it. 00:27:07.641 [2024-11-20 19:04:29.674735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.641 [2024-11-20 19:04:29.674767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.641 qpair failed and we were unable to recover it. 00:27:07.641 [2024-11-20 19:04:29.674971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.641 [2024-11-20 19:04:29.675005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.641 qpair failed and we were unable to recover it. 00:27:07.641 [2024-11-20 19:04:29.675249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.641 [2024-11-20 19:04:29.675283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.641 qpair failed and we were unable to recover it. 00:27:07.641 [2024-11-20 19:04:29.675500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.641 [2024-11-20 19:04:29.675533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.641 qpair failed and we were unable to recover it. 
00:27:07.641 [2024-11-20 19:04:29.675719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.641 [2024-11-20 19:04:29.675753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.641 qpair failed and we were unable to recover it. 00:27:07.641 [2024-11-20 19:04:29.675968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.641 [2024-11-20 19:04:29.676001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.641 qpair failed and we were unable to recover it. 00:27:07.641 [2024-11-20 19:04:29.676189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.641 [2024-11-20 19:04:29.676231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.641 qpair failed and we were unable to recover it. 00:27:07.641 [2024-11-20 19:04:29.676436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.641 [2024-11-20 19:04:29.676469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.641 qpair failed and we were unable to recover it. 00:27:07.641 [2024-11-20 19:04:29.676716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.641 [2024-11-20 19:04:29.676749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.641 qpair failed and we were unable to recover it. 
00:27:07.641 [2024-11-20 19:04:29.676863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.641 [2024-11-20 19:04:29.676896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.641 qpair failed and we were unable to recover it. 00:27:07.641 [2024-11-20 19:04:29.677095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.641 [2024-11-20 19:04:29.677129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.641 qpair failed and we were unable to recover it. 00:27:07.641 [2024-11-20 19:04:29.677304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.641 [2024-11-20 19:04:29.677338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.641 qpair failed and we were unable to recover it. 00:27:07.641 [2024-11-20 19:04:29.677462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.641 [2024-11-20 19:04:29.677495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.641 qpair failed and we were unable to recover it. 00:27:07.641 [2024-11-20 19:04:29.677760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.641 [2024-11-20 19:04:29.677793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.641 qpair failed and we were unable to recover it. 
00:27:07.642 [2024-11-20 19:04:29.677906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.642 [2024-11-20 19:04:29.677939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.642 qpair failed and we were unable to recover it. 00:27:07.642 [2024-11-20 19:04:29.678077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.642 [2024-11-20 19:04:29.678111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.642 qpair failed and we were unable to recover it. 00:27:07.642 [2024-11-20 19:04:29.678288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.642 [2024-11-20 19:04:29.678323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.642 qpair failed and we were unable to recover it. 00:27:07.642 [2024-11-20 19:04:29.678543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.642 [2024-11-20 19:04:29.678588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420 00:27:07.642 qpair failed and we were unable to recover it. 00:27:07.642 [2024-11-20 19:04:29.678850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.642 [2024-11-20 19:04:29.678897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420 00:27:07.642 qpair failed and we were unable to recover it. 
00:27:07.642 [2024-11-20 19:04:29.679019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.642 [2024-11-20 19:04:29.679056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.642 qpair failed and we were unable to recover it. 00:27:07.642 [2024-11-20 19:04:29.679270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.642 [2024-11-20 19:04:29.679305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.642 qpair failed and we were unable to recover it. 00:27:07.642 [2024-11-20 19:04:29.679420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.642 [2024-11-20 19:04:29.679453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.642 qpair failed and we were unable to recover it. 00:27:07.642 [2024-11-20 19:04:29.679562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.642 [2024-11-20 19:04:29.679595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.642 qpair failed and we were unable to recover it. 00:27:07.642 [2024-11-20 19:04:29.679793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.642 [2024-11-20 19:04:29.679826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.642 qpair failed and we were unable to recover it. 
00:27:07.642 [2024-11-20 19:04:29.679934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.642 [2024-11-20 19:04:29.679967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.642 qpair failed and we were unable to recover it. 00:27:07.642 [2024-11-20 19:04:29.680147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.642 [2024-11-20 19:04:29.680180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.642 qpair failed and we were unable to recover it. 00:27:07.642 [2024-11-20 19:04:29.680310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.642 [2024-11-20 19:04:29.680344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.642 qpair failed and we were unable to recover it. 00:27:07.642 [2024-11-20 19:04:29.680516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.642 [2024-11-20 19:04:29.680548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.642 qpair failed and we were unable to recover it. 00:27:07.642 [2024-11-20 19:04:29.680718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.642 [2024-11-20 19:04:29.680753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.642 qpair failed and we were unable to recover it. 
00:27:07.642 [2024-11-20 19:04:29.680940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.642 [2024-11-20 19:04:29.680974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.642 qpair failed and we were unable to recover it. 00:27:07.642 [2024-11-20 19:04:29.681090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.642 [2024-11-20 19:04:29.681130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.642 qpair failed and we were unable to recover it. 00:27:07.642 [2024-11-20 19:04:29.681262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.642 [2024-11-20 19:04:29.681295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.642 qpair failed and we were unable to recover it. 00:27:07.642 [2024-11-20 19:04:29.681469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.642 [2024-11-20 19:04:29.681503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.642 qpair failed and we were unable to recover it. 00:27:07.642 [2024-11-20 19:04:29.681756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.642 [2024-11-20 19:04:29.681790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.642 qpair failed and we were unable to recover it. 
00:27:07.642 [2024-11-20 19:04:29.681904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.642 [2024-11-20 19:04:29.681939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.642 qpair failed and we were unable to recover it. 00:27:07.642 [2024-11-20 19:04:29.682127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.642 [2024-11-20 19:04:29.682160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.642 qpair failed and we were unable to recover it. 00:27:07.642 [2024-11-20 19:04:29.682393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.642 [2024-11-20 19:04:29.682429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.642 qpair failed and we were unable to recover it. 00:27:07.642 [2024-11-20 19:04:29.682609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.642 [2024-11-20 19:04:29.682641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.642 qpair failed and we were unable to recover it. 00:27:07.642 [2024-11-20 19:04:29.682890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.642 [2024-11-20 19:04:29.682924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.642 qpair failed and we were unable to recover it. 
00:27:07.642 [2024-11-20 19:04:29.683047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.642 [2024-11-20 19:04:29.683080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.642 qpair failed and we were unable to recover it. 00:27:07.642 [2024-11-20 19:04:29.683276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.642 [2024-11-20 19:04:29.683312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.642 qpair failed and we were unable to recover it. 00:27:07.642 [2024-11-20 19:04:29.683418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.642 [2024-11-20 19:04:29.683451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.642 qpair failed and we were unable to recover it. 00:27:07.642 [2024-11-20 19:04:29.683723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.642 [2024-11-20 19:04:29.683757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.642 qpair failed and we were unable to recover it. 00:27:07.642 [2024-11-20 19:04:29.683942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.642 [2024-11-20 19:04:29.683975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.642 qpair failed and we were unable to recover it. 
00:27:07.642 [2024-11-20 19:04:29.684106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.642 [2024-11-20 19:04:29.684138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.642 qpair failed and we were unable to recover it. 00:27:07.642 [2024-11-20 19:04:29.684355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.642 [2024-11-20 19:04:29.684391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.642 qpair failed and we were unable to recover it. 00:27:07.643 [2024-11-20 19:04:29.684579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.643 [2024-11-20 19:04:29.684612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.643 qpair failed and we were unable to recover it. 00:27:07.643 [2024-11-20 19:04:29.684736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.643 [2024-11-20 19:04:29.684769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.643 qpair failed and we were unable to recover it. 00:27:07.643 [2024-11-20 19:04:29.684878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.643 [2024-11-20 19:04:29.684912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:27:07.643 qpair failed and we were unable to recover it. 
00:27:07.643 [2024-11-20 19:04:29.685098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.643 [2024-11-20 19:04:29.685133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.643 qpair failed and we were unable to recover it.
00:27:07.643 [2024-11-20 19:04:29.685248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.643 [2024-11-20 19:04:29.685282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.643 qpair failed and we were unable to recover it.
00:27:07.643 [2024-11-20 19:04:29.685406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.643 [2024-11-20 19:04:29.685439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.643 qpair failed and we were unable to recover it.
00:27:07.643 [2024-11-20 19:04:29.685728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.643 [2024-11-20 19:04:29.685762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.643 qpair failed and we were unable to recover it.
00:27:07.643 [2024-11-20 19:04:29.685951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.643 [2024-11-20 19:04:29.685985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.643 qpair failed and we were unable to recover it.
00:27:07.643 [2024-11-20 19:04:29.686227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.643 [2024-11-20 19:04:29.686262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.643 qpair failed and we were unable to recover it.
00:27:07.643 [2024-11-20 19:04:29.686454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.643 [2024-11-20 19:04:29.686488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.643 qpair failed and we were unable to recover it.
00:27:07.643 [2024-11-20 19:04:29.686680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.643 [2024-11-20 19:04:29.686713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.643 qpair failed and we were unable to recover it.
00:27:07.643 [2024-11-20 19:04:29.686915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.643 [2024-11-20 19:04:29.686956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.643 qpair failed and we were unable to recover it.
00:27:07.643 [2024-11-20 19:04:29.687092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.643 [2024-11-20 19:04:29.687125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.643 qpair failed and we were unable to recover it.
00:27:07.643 [2024-11-20 19:04:29.687253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.643 [2024-11-20 19:04:29.687290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.643 qpair failed and we were unable to recover it.
00:27:07.643 [2024-11-20 19:04:29.687498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.643 [2024-11-20 19:04:29.687531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.643 qpair failed and we were unable to recover it.
00:27:07.643 [2024-11-20 19:04:29.687661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.643 [2024-11-20 19:04:29.687694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.643 qpair failed and we were unable to recover it.
00:27:07.643 [2024-11-20 19:04:29.687873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.643 [2024-11-20 19:04:29.687905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.643 qpair failed and we were unable to recover it.
00:27:07.643 [2024-11-20 19:04:29.688077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.643 [2024-11-20 19:04:29.688110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.643 qpair failed and we were unable to recover it.
00:27:07.643 [2024-11-20 19:04:29.688287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.643 [2024-11-20 19:04:29.688322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.643 qpair failed and we were unable to recover it.
00:27:07.643 [2024-11-20 19:04:29.688497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.643 [2024-11-20 19:04:29.688530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.643 qpair failed and we were unable to recover it.
00:27:07.643 [2024-11-20 19:04:29.688724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.643 [2024-11-20 19:04:29.688758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.643 qpair failed and we were unable to recover it.
00:27:07.643 [2024-11-20 19:04:29.688952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.643 [2024-11-20 19:04:29.688984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.643 qpair failed and we were unable to recover it.
00:27:07.643 [2024-11-20 19:04:29.689169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.643 [2024-11-20 19:04:29.689212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.643 qpair failed and we were unable to recover it.
00:27:07.643 Malloc0
00:27:07.643 [2024-11-20 19:04:29.689427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.643 [2024-11-20 19:04:29.689461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.643 qpair failed and we were unable to recover it.
00:27:07.643 [2024-11-20 19:04:29.689568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.643 [2024-11-20 19:04:29.689610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.643 qpair failed and we were unable to recover it.
00:27:07.643 [2024-11-20 19:04:29.689798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.643 [2024-11-20 19:04:29.689830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.643 qpair failed and we were unable to recover it.
00:27:07.643 [2024-11-20 19:04:29.689965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.643 [2024-11-20 19:04:29.689998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.643 qpair failed and we were unable to recover it.
00:27:07.643 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:07.643 [2024-11-20 19:04:29.690177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.643 [2024-11-20 19:04:29.690218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.643 qpair failed and we were unable to recover it.
00:27:07.643 [2024-11-20 19:04:29.690400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.643 [2024-11-20 19:04:29.690434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.643 qpair failed and we were unable to recover it.
00:27:07.644 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:07.644 [2024-11-20 19:04:29.690620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.644 [2024-11-20 19:04:29.690654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.644 qpair failed and we were unable to recover it.
00:27:07.644 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:07.644 [2024-11-20 19:04:29.690866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.644 [2024-11-20 19:04:29.690900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.644 qpair failed and we were unable to recover it.
00:27:07.644 [2024-11-20 19:04:29.691114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.644 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:07.644 [2024-11-20 19:04:29.691147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.644 qpair failed and we were unable to recover it.
00:27:07.644 [2024-11-20 19:04:29.691351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.644 [2024-11-20 19:04:29.691386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.644 qpair failed and we were unable to recover it.
00:27:07.644 [2024-11-20 19:04:29.691560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.644 [2024-11-20 19:04:29.691593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.644 qpair failed and we were unable to recover it.
00:27:07.644 [2024-11-20 19:04:29.691816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.644 [2024-11-20 19:04:29.691850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.644 qpair failed and we were unable to recover it.
00:27:07.644 [2024-11-20 19:04:29.692034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.644 [2024-11-20 19:04:29.692066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.644 qpair failed and we were unable to recover it.
00:27:07.644 [2024-11-20 19:04:29.692348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.644 [2024-11-20 19:04:29.692384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.644 qpair failed and we were unable to recover it.
00:27:07.644 [2024-11-20 19:04:29.692559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.644 [2024-11-20 19:04:29.692592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.644 qpair failed and we were unable to recover it.
00:27:07.644 [2024-11-20 19:04:29.692786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.644 [2024-11-20 19:04:29.692819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.644 qpair failed and we were unable to recover it.
00:27:07.644 [2024-11-20 19:04:29.693010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.644 [2024-11-20 19:04:29.693044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.644 qpair failed and we were unable to recover it.
00:27:07.644 [2024-11-20 19:04:29.693312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.644 [2024-11-20 19:04:29.693348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.644 qpair failed and we were unable to recover it.
00:27:07.644 [2024-11-20 19:04:29.693540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.644 [2024-11-20 19:04:29.693573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.644 qpair failed and we were unable to recover it.
00:27:07.644 [2024-11-20 19:04:29.693771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.644 [2024-11-20 19:04:29.693803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.644 qpair failed and we were unable to recover it.
00:27:07.644 [2024-11-20 19:04:29.693987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.644 [2024-11-20 19:04:29.694021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.644 qpair failed and we were unable to recover it.
00:27:07.644 [2024-11-20 19:04:29.694275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.644 [2024-11-20 19:04:29.694310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.644 qpair failed and we were unable to recover it.
00:27:07.644 [2024-11-20 19:04:29.694496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.644 [2024-11-20 19:04:29.694529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.644 qpair failed and we were unable to recover it.
00:27:07.644 [2024-11-20 19:04:29.694649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.644 [2024-11-20 19:04:29.694683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.644 qpair failed and we were unable to recover it.
00:27:07.644 [2024-11-20 19:04:29.694876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.644 [2024-11-20 19:04:29.694909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.644 qpair failed and we were unable to recover it.
00:27:07.644 [2024-11-20 19:04:29.695099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.644 [2024-11-20 19:04:29.695131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.644 qpair failed and we were unable to recover it.
00:27:07.644 [2024-11-20 19:04:29.695345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.644 [2024-11-20 19:04:29.695385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.644 qpair failed and we were unable to recover it.
00:27:07.644 [2024-11-20 19:04:29.695578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.644 [2024-11-20 19:04:29.695610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.644 qpair failed and we were unable to recover it.
00:27:07.644 [2024-11-20 19:04:29.695781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.644 [2024-11-20 19:04:29.695813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.644 qpair failed and we were unable to recover it.
00:27:07.644 [2024-11-20 19:04:29.696102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.644 [2024-11-20 19:04:29.696140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.644 qpair failed and we were unable to recover it.
00:27:07.644 [2024-11-20 19:04:29.696415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.644 [2024-11-20 19:04:29.696451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.644 qpair failed and we were unable to recover it.
00:27:07.644 [2024-11-20 19:04:29.696712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.644 [2024-11-20 19:04:29.696746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.644 qpair failed and we were unable to recover it.
00:27:07.644 [2024-11-20 19:04:29.696923] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:07.644 [2024-11-20 19:04:29.697012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.644 [2024-11-20 19:04:29.697044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.644 qpair failed and we were unable to recover it.
00:27:07.644 [2024-11-20 19:04:29.697247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.644 [2024-11-20 19:04:29.697282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.645 qpair failed and we were unable to recover it.
00:27:07.645 [2024-11-20 19:04:29.697415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.645 [2024-11-20 19:04:29.697447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.645 qpair failed and we were unable to recover it.
00:27:07.645 [2024-11-20 19:04:29.697568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.645 [2024-11-20 19:04:29.697602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.645 qpair failed and we were unable to recover it.
00:27:07.645 [2024-11-20 19:04:29.697775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.645 [2024-11-20 19:04:29.697808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.645 qpair failed and we were unable to recover it.
00:27:07.645 [2024-11-20 19:04:29.697981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.645 [2024-11-20 19:04:29.698015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.645 qpair failed and we were unable to recover it.
00:27:07.645 [2024-11-20 19:04:29.698255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.645 [2024-11-20 19:04:29.698290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.645 qpair failed and we were unable to recover it.
00:27:07.645 [2024-11-20 19:04:29.698497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.645 [2024-11-20 19:04:29.698536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.645 qpair failed and we were unable to recover it.
00:27:07.645 [2024-11-20 19:04:29.698716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.645 [2024-11-20 19:04:29.698750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.645 qpair failed and we were unable to recover it.
00:27:07.645 [2024-11-20 19:04:29.698934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.645 [2024-11-20 19:04:29.698967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.645 qpair failed and we were unable to recover it.
00:27:07.645 [2024-11-20 19:04:29.699149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.645 [2024-11-20 19:04:29.699184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.645 qpair failed and we were unable to recover it.
00:27:07.645 [2024-11-20 19:04:29.699390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.645 [2024-11-20 19:04:29.699424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.645 qpair failed and we were unable to recover it.
00:27:07.645 [2024-11-20 19:04:29.699620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.645 [2024-11-20 19:04:29.699653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.645 qpair failed and we were unable to recover it.
00:27:07.645 [2024-11-20 19:04:29.699838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.645 [2024-11-20 19:04:29.699871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.645 qpair failed and we were unable to recover it.
00:27:07.645 [2024-11-20 19:04:29.700047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.645 [2024-11-20 19:04:29.700080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.645 qpair failed and we were unable to recover it.
00:27:07.645 [2024-11-20 19:04:29.700194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.645 [2024-11-20 19:04:29.700237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.645 qpair failed and we were unable to recover it.
00:27:07.645 [2024-11-20 19:04:29.700411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.645 [2024-11-20 19:04:29.700445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.645 qpair failed and we were unable to recover it.
00:27:07.645 [2024-11-20 19:04:29.700641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.645 [2024-11-20 19:04:29.700674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.645 qpair failed and we were unable to recover it.
00:27:07.645 [2024-11-20 19:04:29.700848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.645 [2024-11-20 19:04:29.700881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.645 qpair failed and we were unable to recover it.
00:27:07.645 [2024-11-20 19:04:29.701055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.645 [2024-11-20 19:04:29.701088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.645 qpair failed and we were unable to recover it.
00:27:07.645 [2024-11-20 19:04:29.701198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.645 [2024-11-20 19:04:29.701247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.645 qpair failed and we were unable to recover it.
00:27:07.645 [2024-11-20 19:04:29.701441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.645 [2024-11-20 19:04:29.701476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.645 qpair failed and we were unable to recover it.
00:27:07.645 [2024-11-20 19:04:29.701672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.645 [2024-11-20 19:04:29.701706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.645 qpair failed and we were unable to recover it.
00:27:07.645 [2024-11-20 19:04:29.701892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.645 [2024-11-20 19:04:29.701925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.645 qpair failed and we were unable to recover it.
00:27:07.645 [2024-11-20 19:04:29.702116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.645 [2024-11-20 19:04:29.702149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.645 qpair failed and we were unable to recover it.
00:27:07.645 [2024-11-20 19:04:29.702345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.645 [2024-11-20 19:04:29.702379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.645 qpair failed and we were unable to recover it.
00:27:07.645 [2024-11-20 19:04:29.702640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.645 [2024-11-20 19:04:29.702674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.645 qpair failed and we were unable to recover it.
00:27:07.645 [2024-11-20 19:04:29.702847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.645 [2024-11-20 19:04:29.702879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.645 qpair failed and we were unable to recover it.
00:27:07.646 [2024-11-20 19:04:29.703145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.646 [2024-11-20 19:04:29.703178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.646 qpair failed and we were unable to recover it.
00:27:07.646 [2024-11-20 19:04:29.703440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.646 [2024-11-20 19:04:29.703474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.646 qpair failed and we were unable to recover it.
00:27:07.646 [2024-11-20 19:04:29.703604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.646 [2024-11-20 19:04:29.703636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.646 qpair failed and we were unable to recover it.
00:27:07.646 [2024-11-20 19:04:29.703753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.646 [2024-11-20 19:04:29.703786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.646 qpair failed and we were unable to recover it.
00:27:07.646 [2024-11-20 19:04:29.703965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.646 [2024-11-20 19:04:29.703999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.646 qpair failed and we were unable to recover it.
00:27:07.646 [2024-11-20 19:04:29.704179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.646 [2024-11-20 19:04:29.704221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f741c000b90 with addr=10.0.0.2, port=4420
00:27:07.646 qpair failed and we were unable to recover it.
00:27:07.646 [2024-11-20 19:04:29.704480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.646 [2024-11-20 19:04:29.704521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.646 qpair failed and we were unable to recover it.
00:27:07.646 [2024-11-20 19:04:29.704766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.646 [2024-11-20 19:04:29.704800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.646 qpair failed and we were unable to recover it.
00:27:07.646 [2024-11-20 19:04:29.704990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.646 [2024-11-20 19:04:29.705023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.646 qpair failed and we were unable to recover it.
00:27:07.646 [2024-11-20 19:04:29.705267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.646 [2024-11-20 19:04:29.705302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.646 qpair failed and we were unable to recover it.
00:27:07.646 [2024-11-20 19:04:29.705495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.646 [2024-11-20 19:04:29.705528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.646 qpair failed and we were unable to recover it.
00:27:07.646 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:07.646 [2024-11-20 19:04:29.705769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.646 [2024-11-20 19:04:29.705803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.646 qpair failed and we were unable to recover it.
00:27:07.646 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:07.646 [2024-11-20 19:04:29.706045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.646 [2024-11-20 19:04:29.706079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.646 qpair failed and we were unable to recover it.
00:27:07.646 [2024-11-20 19:04:29.706242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.646 [2024-11-20 19:04:29.706278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.646 qpair failed and we were unable to recover it.
00:27:07.646 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:07.646 [2024-11-20 19:04:29.706503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.646 [2024-11-20 19:04:29.706536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.646 qpair failed and we were unable to recover it.
00:27:07.646 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:07.646 [2024-11-20 19:04:29.706732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.646 [2024-11-20 19:04:29.706769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.646 qpair failed and we were unable to recover it.
00:27:07.646 [2024-11-20 19:04:29.706986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.646 [2024-11-20 19:04:29.707018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.646 qpair failed and we were unable to recover it.
00:27:07.646 [2024-11-20 19:04:29.707215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.646 [2024-11-20 19:04:29.707257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.646 qpair failed and we were unable to recover it. 00:27:07.646 [2024-11-20 19:04:29.707387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.646 [2024-11-20 19:04:29.707422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.646 qpair failed and we were unable to recover it. 00:27:07.646 [2024-11-20 19:04:29.707601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.646 [2024-11-20 19:04:29.707634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.646 qpair failed and we were unable to recover it. 00:27:07.646 [2024-11-20 19:04:29.707877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.646 [2024-11-20 19:04:29.707909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.646 qpair failed and we were unable to recover it. 00:27:07.646 [2024-11-20 19:04:29.708102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.646 [2024-11-20 19:04:29.708135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420 00:27:07.646 qpair failed and we were unable to recover it. 
00:27:07.646 [2024-11-20 19:04:29.708306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.646 [2024-11-20 19:04:29.708342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.646 qpair failed and we were unable to recover it.
00:27:07.646 [2024-11-20 19:04:29.708472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.646 [2024-11-20 19:04:29.708505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.646 qpair failed and we were unable to recover it.
00:27:07.646 [2024-11-20 19:04:29.708704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.646 [2024-11-20 19:04:29.708738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.646 qpair failed and we were unable to recover it.
00:27:07.646 [2024-11-20 19:04:29.708878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.646 [2024-11-20 19:04:29.708912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.646 qpair failed and we were unable to recover it.
00:27:07.646 [2024-11-20 19:04:29.709029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.646 [2024-11-20 19:04:29.709062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.646 qpair failed and we were unable to recover it.
00:27:07.646 [2024-11-20 19:04:29.709187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.646 [2024-11-20 19:04:29.709228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.646 qpair failed and we were unable to recover it.
00:27:07.646 [2024-11-20 19:04:29.709394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.646 [2024-11-20 19:04:29.709434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.647 qpair failed and we were unable to recover it.
00:27:07.647 [2024-11-20 19:04:29.709622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.647 [2024-11-20 19:04:29.709655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.647 qpair failed and we were unable to recover it.
00:27:07.647 [2024-11-20 19:04:29.709851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.647 [2024-11-20 19:04:29.709884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.647 qpair failed and we were unable to recover it.
00:27:07.647 [2024-11-20 19:04:29.710072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.647 [2024-11-20 19:04:29.710105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.647 qpair failed and we were unable to recover it.
00:27:07.647 [2024-11-20 19:04:29.710238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.647 [2024-11-20 19:04:29.710277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.647 qpair failed and we were unable to recover it.
00:27:07.647 [2024-11-20 19:04:29.710531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.647 [2024-11-20 19:04:29.710564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.647 qpair failed and we were unable to recover it.
00:27:07.647 [2024-11-20 19:04:29.710751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.647 [2024-11-20 19:04:29.710784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.647 qpair failed and we were unable to recover it.
00:27:07.647 [2024-11-20 19:04:29.710899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.647 [2024-11-20 19:04:29.710932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.647 qpair failed and we were unable to recover it.
00:27:07.647 [2024-11-20 19:04:29.711068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.647 [2024-11-20 19:04:29.711103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.647 qpair failed and we were unable to recover it.
00:27:07.647 [2024-11-20 19:04:29.711239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.647 [2024-11-20 19:04:29.711274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.647 qpair failed and we were unable to recover it.
00:27:07.647 [2024-11-20 19:04:29.711393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.647 [2024-11-20 19:04:29.711426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.647 qpair failed and we were unable to recover it.
00:27:07.647 [2024-11-20 19:04:29.711674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.647 [2024-11-20 19:04:29.711707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.647 qpair failed and we were unable to recover it.
00:27:07.647 [2024-11-20 19:04:29.711885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.647 [2024-11-20 19:04:29.711919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.647 qpair failed and we were unable to recover it.
00:27:07.647 [2024-11-20 19:04:29.712103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.647 [2024-11-20 19:04:29.712136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.647 qpair failed and we were unable to recover it.
00:27:07.647 [2024-11-20 19:04:29.712393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.647 [2024-11-20 19:04:29.712427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.647 qpair failed and we were unable to recover it.
00:27:07.647 [2024-11-20 19:04:29.712544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.647 [2024-11-20 19:04:29.712577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7424000b90 with addr=10.0.0.2, port=4420
00:27:07.647 qpair failed and we were unable to recover it.
00:27:07.647 [2024-11-20 19:04:29.712774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.647 [2024-11-20 19:04:29.712822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.647 qpair failed and we were unable to recover it.
00:27:07.647 [2024-11-20 19:04:29.712952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.647 [2024-11-20 19:04:29.712987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.647 qpair failed and we were unable to recover it.
00:27:07.647 [2024-11-20 19:04:29.713107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.647 [2024-11-20 19:04:29.713141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.647 qpair failed and we were unable to recover it.
00:27:07.647 [2024-11-20 19:04:29.713422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.647 [2024-11-20 19:04:29.713457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.647 qpair failed and we were unable to recover it.
00:27:07.647 [2024-11-20 19:04:29.713578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.647 [2024-11-20 19:04:29.713611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.647 qpair failed and we were unable to recover it.
00:27:07.647 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:07.647 [2024-11-20 19:04:29.713822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.647 [2024-11-20 19:04:29.713856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.647 qpair failed and we were unable to recover it.
00:27:07.647 [2024-11-20 19:04:29.714066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.647 [2024-11-20 19:04:29.714100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.647 qpair failed and we were unable to recover it.
00:27:07.647 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:07.647 [2024-11-20 19:04:29.714345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.647 [2024-11-20 19:04:29.714381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.647 qpair failed and we were unable to recover it.
00:27:07.647 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:07.647 [2024-11-20 19:04:29.714575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.647 [2024-11-20 19:04:29.714608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.647 qpair failed and we were unable to recover it.
00:27:07.647 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:07.647 [2024-11-20 19:04:29.714752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.647 [2024-11-20 19:04:29.714786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.647 qpair failed and we were unable to recover it.
00:27:07.647 [2024-11-20 19:04:29.715000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.647 [2024-11-20 19:04:29.715034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.647 qpair failed and we were unable to recover it.
00:27:07.647 [2024-11-20 19:04:29.715260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.647 [2024-11-20 19:04:29.715295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.647 qpair failed and we were unable to recover it.
00:27:07.647 [2024-11-20 19:04:29.715421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.647 [2024-11-20 19:04:29.715456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.647 qpair failed and we were unable to recover it.
00:27:07.648 [2024-11-20 19:04:29.715671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.648 [2024-11-20 19:04:29.715705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.648 qpair failed and we were unable to recover it.
00:27:07.648 [2024-11-20 19:04:29.715830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.648 [2024-11-20 19:04:29.715864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.648 qpair failed and we were unable to recover it.
00:27:07.648 [2024-11-20 19:04:29.716063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.648 [2024-11-20 19:04:29.716096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.648 qpair failed and we were unable to recover it.
00:27:07.648 [2024-11-20 19:04:29.716228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.648 [2024-11-20 19:04:29.716263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.648 qpair failed and we were unable to recover it.
00:27:07.648 [2024-11-20 19:04:29.716402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.648 [2024-11-20 19:04:29.716437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.648 qpair failed and we were unable to recover it.
00:27:07.648 [2024-11-20 19:04:29.716563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.648 [2024-11-20 19:04:29.716596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.648 qpair failed and we were unable to recover it.
00:27:07.648 [2024-11-20 19:04:29.716772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.648 [2024-11-20 19:04:29.716806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.648 qpair failed and we were unable to recover it.
00:27:07.648 [2024-11-20 19:04:29.717020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.648 [2024-11-20 19:04:29.717054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.648 qpair failed and we were unable to recover it.
00:27:07.648 [2024-11-20 19:04:29.717250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.648 [2024-11-20 19:04:29.717286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.648 qpair failed and we were unable to recover it.
00:27:07.648 [2024-11-20 19:04:29.717406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.648 [2024-11-20 19:04:29.717439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.648 qpair failed and we were unable to recover it.
00:27:07.648 [2024-11-20 19:04:29.717618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.648 [2024-11-20 19:04:29.717651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.648 qpair failed and we were unable to recover it.
00:27:07.648 [2024-11-20 19:04:29.717765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.648 [2024-11-20 19:04:29.717799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.648 qpair failed and we were unable to recover it.
00:27:07.648 [2024-11-20 19:04:29.717984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.648 [2024-11-20 19:04:29.718024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.648 qpair failed and we were unable to recover it.
00:27:07.648 [2024-11-20 19:04:29.718219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.648 [2024-11-20 19:04:29.718254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.648 qpair failed and we were unable to recover it.
00:27:07.648 [2024-11-20 19:04:29.718432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.648 [2024-11-20 19:04:29.718465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.648 qpair failed and we were unable to recover it.
00:27:07.648 [2024-11-20 19:04:29.718644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.648 [2024-11-20 19:04:29.718677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.648 qpair failed and we were unable to recover it.
00:27:07.648 [2024-11-20 19:04:29.718919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.648 [2024-11-20 19:04:29.718953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.648 qpair failed and we were unable to recover it.
00:27:07.648 [2024-11-20 19:04:29.719138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.648 [2024-11-20 19:04:29.719170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6aba0 with addr=10.0.0.2, port=4420
00:27:07.648 qpair failed and we were unable to recover it.
00:27:07.648 [2024-11-20 19:04:29.719387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.648 [2024-11-20 19:04:29.719425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.648 qpair failed and we were unable to recover it.
00:27:07.648 [2024-11-20 19:04:29.719633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.648 [2024-11-20 19:04:29.719666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.648 qpair failed and we were unable to recover it.
00:27:07.648 [2024-11-20 19:04:29.719787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.648 [2024-11-20 19:04:29.719821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.648 qpair failed and we were unable to recover it.
00:27:07.648 [2024-11-20 19:04:29.719999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.648 [2024-11-20 19:04:29.720032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.648 qpair failed and we were unable to recover it.
00:27:07.648 [2024-11-20 19:04:29.720143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.648 [2024-11-20 19:04:29.720177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.648 qpair failed and we were unable to recover it.
00:27:07.648 [2024-11-20 19:04:29.720316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.648 [2024-11-20 19:04:29.720349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.648 qpair failed and we were unable to recover it.
00:27:07.648 [2024-11-20 19:04:29.720549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.648 [2024-11-20 19:04:29.720583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.648 qpair failed and we were unable to recover it.
00:27:07.648 [2024-11-20 19:04:29.720770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.648 [2024-11-20 19:04:29.720804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.648 qpair failed and we were unable to recover it.
00:27:07.648 [2024-11-20 19:04:29.721006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.648 [2024-11-20 19:04:29.721042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.648 qpair failed and we were unable to recover it.
00:27:07.648 [2024-11-20 19:04:29.721242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.648 [2024-11-20 19:04:29.721275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.648 qpair failed and we were unable to recover it.
00:27:07.648 [2024-11-20 19:04:29.721395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.648 [2024-11-20 19:04:29.721429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.648 qpair failed and we were unable to recover it.
00:27:07.648 [2024-11-20 19:04:29.721606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.648 [2024-11-20 19:04:29.721640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.648 qpair failed and we were unable to recover it.
00:27:07.648 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:07.648 [2024-11-20 19:04:29.721907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.648 [2024-11-20 19:04:29.721941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.648 qpair failed and we were unable to recover it.
00:27:07.648 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 [2024-11-20 19:04:29.722121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.649 [2024-11-20 19:04:29.722154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.649 qpair failed and we were unable to recover it.
00:27:07.649 [2024-11-20 19:04:29.722284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.649 [2024-11-20 19:04:29.722318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.649 qpair failed and we were unable to recover it.
00:27:07.649 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:07.649 [2024-11-20 19:04:29.722449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.649 [2024-11-20 19:04:29.722482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.649 qpair failed and we were unable to recover it.
00:27:07.649 [2024-11-20 19:04:29.722665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.649 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x [2024-11-20 19:04:29.722698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.649 qpair failed and we were unable to recover it.
00:27:07.649 [2024-11-20 19:04:29.722835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.649 [2024-11-20 19:04:29.722869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.649 qpair failed and we were unable to recover it.
00:27:07.649 [2024-11-20 19:04:29.723167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.649 [2024-11-20 19:04:29.723199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.649 qpair failed and we were unable to recover it.
00:27:07.649 [2024-11-20 19:04:29.723389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.649 [2024-11-20 19:04:29.723429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.649 qpair failed and we were unable to recover it.
00:27:07.649 [2024-11-20 19:04:29.723670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.649 [2024-11-20 19:04:29.723703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.649 qpair failed and we were unable to recover it.
00:27:07.649 [2024-11-20 19:04:29.723837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.649 [2024-11-20 19:04:29.723870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.649 qpair failed and we were unable to recover it.
00:27:07.649 [2024-11-20 19:04:29.723980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.649 [2024-11-20 19:04:29.724013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.649 qpair failed and we were unable to recover it.
00:27:07.649 [2024-11-20 19:04:29.724190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.649 [2024-11-20 19:04:29.724236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.649 qpair failed and we were unable to recover it.
00:27:07.649 [2024-11-20 19:04:29.724504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.649 [2024-11-20 19:04:29.724537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.649 qpair failed and we were unable to recover it.
00:27:07.649 [2024-11-20 19:04:29.724827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.649 [2024-11-20 19:04:29.724861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.649 qpair failed and we were unable to recover it.
00:27:07.649 [2024-11-20 19:04:29.725047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.649 [2024-11-20 19:04:29.725080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:27:07.649 qpair failed and we were unable to recover it.
00:27:07.649 [2024-11-20 19:04:29.725180] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:07.649 [2024-11-20 19:04:29.727577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.649 [2024-11-20 19:04:29.727687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.649 [2024-11-20 19:04:29.727730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.649 [2024-11-20 19:04:29.727753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.649 [2024-11-20 19:04:29.727774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:07.649 [2024-11-20 19:04:29.727825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:07.649 qpair failed and we were unable to recover it.
00:27:07.649 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[2024-11-20 19:04:29.737499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
[2024-11-20 19:04:29.737591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
[2024-11-20 19:04:29.737619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
[2024-11-20 19:04:29.737634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
[2024-11-20 19:04:29.737648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
[2024-11-20 19:04:29.737679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
qpair failed and we were unable to recover it.
00:27:07.649 19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
19:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3803932
[2024-11-20 19:04:29.747455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
[2024-11-20 19:04:29.747522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
[2024-11-20 19:04:29.747540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
[2024-11-20 19:04:29.747551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
[2024-11-20 19:04:29.747559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
[2024-11-20 19:04:29.747581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
qpair failed and we were unable to recover it.
00:27:07.649 [2024-11-20 19:04:29.757439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.649 [2024-11-20 19:04:29.757499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.649 [2024-11-20 19:04:29.757512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.649 [2024-11-20 19:04:29.757519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.649 [2024-11-20 19:04:29.757525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:07.649 [2024-11-20 19:04:29.757539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:07.649 qpair failed and we were unable to recover it.
00:27:07.649 [2024-11-20 19:04:29.767468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.650 [2024-11-20 19:04:29.767551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.650 [2024-11-20 19:04:29.767565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.650 [2024-11-20 19:04:29.767573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.650 [2024-11-20 19:04:29.767579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:07.650 [2024-11-20 19:04:29.767594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:07.650 qpair failed and we were unable to recover it.
00:27:07.650 [2024-11-20 19:04:29.777505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.650 [2024-11-20 19:04:29.777560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.650 [2024-11-20 19:04:29.777574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.650 [2024-11-20 19:04:29.777581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.650 [2024-11-20 19:04:29.777587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:07.650 [2024-11-20 19:04:29.777602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:07.650 qpair failed and we were unable to recover it.
00:27:07.650 [2024-11-20 19:04:29.787535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.650 [2024-11-20 19:04:29.787591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.650 [2024-11-20 19:04:29.787605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.650 [2024-11-20 19:04:29.787612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.650 [2024-11-20 19:04:29.787618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:07.650 [2024-11-20 19:04:29.787633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:07.650 qpair failed and we were unable to recover it.
00:27:07.650 [2024-11-20 19:04:29.797581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.650 [2024-11-20 19:04:29.797641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.650 [2024-11-20 19:04:29.797654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.650 [2024-11-20 19:04:29.797661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.650 [2024-11-20 19:04:29.797668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:07.650 [2024-11-20 19:04:29.797683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:07.650 qpair failed and we were unable to recover it.
00:27:07.650 [2024-11-20 19:04:29.807604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.650 [2024-11-20 19:04:29.807690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.650 [2024-11-20 19:04:29.807704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.650 [2024-11-20 19:04:29.807711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.650 [2024-11-20 19:04:29.807717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:07.650 [2024-11-20 19:04:29.807732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:07.650 qpair failed and we were unable to recover it.
00:27:07.650 [2024-11-20 19:04:29.817671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.650 [2024-11-20 19:04:29.817779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.650 [2024-11-20 19:04:29.817792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.650 [2024-11-20 19:04:29.817802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.650 [2024-11-20 19:04:29.817808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:07.650 [2024-11-20 19:04:29.817823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:07.650 qpair failed and we were unable to recover it.
00:27:07.650 [2024-11-20 19:04:29.827594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.650 [2024-11-20 19:04:29.827648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.650 [2024-11-20 19:04:29.827661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.650 [2024-11-20 19:04:29.827668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.650 [2024-11-20 19:04:29.827675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:07.650 [2024-11-20 19:04:29.827690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:07.650 qpair failed and we were unable to recover it.
00:27:07.650 [2024-11-20 19:04:29.837683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.650 [2024-11-20 19:04:29.837740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.650 [2024-11-20 19:04:29.837754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.650 [2024-11-20 19:04:29.837761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.650 [2024-11-20 19:04:29.837768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:07.650 [2024-11-20 19:04:29.837783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:07.650 qpair failed and we were unable to recover it.
00:27:07.650 [2024-11-20 19:04:29.847697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.650 [2024-11-20 19:04:29.847763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.650 [2024-11-20 19:04:29.847777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.650 [2024-11-20 19:04:29.847784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.650 [2024-11-20 19:04:29.847790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:07.650 [2024-11-20 19:04:29.847805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:07.650 qpair failed and we were unable to recover it.
00:27:07.650 [2024-11-20 19:04:29.857780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.650 [2024-11-20 19:04:29.857886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.650 [2024-11-20 19:04:29.857899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.650 [2024-11-20 19:04:29.857906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.650 [2024-11-20 19:04:29.857912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:07.650 [2024-11-20 19:04:29.857929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:07.650 qpair failed and we were unable to recover it.
00:27:07.650 [2024-11-20 19:04:29.867759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.650 [2024-11-20 19:04:29.867831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.650 [2024-11-20 19:04:29.867844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.651 [2024-11-20 19:04:29.867851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.651 [2024-11-20 19:04:29.867857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:07.651 [2024-11-20 19:04:29.867872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:07.651 qpair failed and we were unable to recover it.
00:27:07.651 [2024-11-20 19:04:29.877791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.651 [2024-11-20 19:04:29.877849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.651 [2024-11-20 19:04:29.877862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.651 [2024-11-20 19:04:29.877870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.651 [2024-11-20 19:04:29.877876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:07.651 [2024-11-20 19:04:29.877890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:07.651 qpair failed and we were unable to recover it.
00:27:07.651 [2024-11-20 19:04:29.887827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.651 [2024-11-20 19:04:29.887883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.651 [2024-11-20 19:04:29.887896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.651 [2024-11-20 19:04:29.887903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.651 [2024-11-20 19:04:29.887909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:07.651 [2024-11-20 19:04:29.887923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:07.651 qpair failed and we were unable to recover it.
00:27:07.651 [2024-11-20 19:04:29.897839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.651 [2024-11-20 19:04:29.897896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.651 [2024-11-20 19:04:29.897910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.651 [2024-11-20 19:04:29.897918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.651 [2024-11-20 19:04:29.897924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:07.651 [2024-11-20 19:04:29.897938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:07.651 qpair failed and we were unable to recover it.
00:27:07.651 [2024-11-20 19:04:29.907799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.651 [2024-11-20 19:04:29.907858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.651 [2024-11-20 19:04:29.907872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.651 [2024-11-20 19:04:29.907879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.651 [2024-11-20 19:04:29.907885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:07.651 [2024-11-20 19:04:29.907900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:07.651 qpair failed and we were unable to recover it.
00:27:07.651 [2024-11-20 19:04:29.917928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.651 [2024-11-20 19:04:29.917996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.651 [2024-11-20 19:04:29.918010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.651 [2024-11-20 19:04:29.918017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.651 [2024-11-20 19:04:29.918024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:07.651 [2024-11-20 19:04:29.918038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:07.651 qpair failed and we were unable to recover it.
00:27:07.651 [2024-11-20 19:04:29.927933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.651 [2024-11-20 19:04:29.928000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.651 [2024-11-20 19:04:29.928013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.651 [2024-11-20 19:04:29.928021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.651 [2024-11-20 19:04:29.928027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:07.651 [2024-11-20 19:04:29.928041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:07.651 qpair failed and we were unable to recover it.
00:27:07.651 [2024-11-20 19:04:29.937953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.651 [2024-11-20 19:04:29.938003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.651 [2024-11-20 19:04:29.938018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.651 [2024-11-20 19:04:29.938027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.651 [2024-11-20 19:04:29.938035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:07.651 [2024-11-20 19:04:29.938052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:07.651 qpair failed and we were unable to recover it.
00:27:07.651 [2024-11-20 19:04:29.947914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.651 [2024-11-20 19:04:29.947980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.651 [2024-11-20 19:04:29.947997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.651 [2024-11-20 19:04:29.948005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.651 [2024-11-20 19:04:29.948011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:07.652 [2024-11-20 19:04:29.948025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:07.652 qpair failed and we were unable to recover it.
00:27:07.912 [2024-11-20 19:04:29.958013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.912 [2024-11-20 19:04:29.958071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.912 [2024-11-20 19:04:29.958085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.912 [2024-11-20 19:04:29.958093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.912 [2024-11-20 19:04:29.958099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:07.913 [2024-11-20 19:04:29.958114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:07.913 qpair failed and we were unable to recover it.
00:27:07.913 [2024-11-20 19:04:29.967962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.913 [2024-11-20 19:04:29.968018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.913 [2024-11-20 19:04:29.968032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.913 [2024-11-20 19:04:29.968039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.913 [2024-11-20 19:04:29.968046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:07.913 [2024-11-20 19:04:29.968060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:07.913 qpair failed and we were unable to recover it.
00:27:07.913 [2024-11-20 19:04:29.978061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.913 [2024-11-20 19:04:29.978116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.913 [2024-11-20 19:04:29.978130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.913 [2024-11-20 19:04:29.978137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.913 [2024-11-20 19:04:29.978144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:07.913 [2024-11-20 19:04:29.978158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:07.913 qpair failed and we were unable to recover it.
00:27:07.913 [2024-11-20 19:04:29.988112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.913 [2024-11-20 19:04:29.988170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.913 [2024-11-20 19:04:29.988184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.913 [2024-11-20 19:04:29.988191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.913 [2024-11-20 19:04:29.988205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:07.913 [2024-11-20 19:04:29.988220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:07.913 qpair failed and we were unable to recover it.
00:27:07.913 [2024-11-20 19:04:29.998147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.913 [2024-11-20 19:04:29.998209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.913 [2024-11-20 19:04:29.998223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.913 [2024-11-20 19:04:29.998230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.913 [2024-11-20 19:04:29.998237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:07.913 [2024-11-20 19:04:29.998251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:07.913 qpair failed and we were unable to recover it.
00:27:07.913 [2024-11-20 19:04:30.008195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.913 [2024-11-20 19:04:30.008254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.913 [2024-11-20 19:04:30.008268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.913 [2024-11-20 19:04:30.008275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.913 [2024-11-20 19:04:30.008281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:07.913 [2024-11-20 19:04:30.008295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:07.913 qpair failed and we were unable to recover it.
00:27:07.913 [2024-11-20 19:04:30.018131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.913 [2024-11-20 19:04:30.018194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.913 [2024-11-20 19:04:30.018217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.913 [2024-11-20 19:04:30.018225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.913 [2024-11-20 19:04:30.018232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:07.913 [2024-11-20 19:04:30.018249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:07.913 qpair failed and we were unable to recover it.
00:27:07.913 [2024-11-20 19:04:30.028245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.913 [2024-11-20 19:04:30.028310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.913 [2024-11-20 19:04:30.028328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.913 [2024-11-20 19:04:30.028338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.913 [2024-11-20 19:04:30.028346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:07.913 [2024-11-20 19:04:30.028363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:07.913 qpair failed and we were unable to recover it.
00:27:07.913 [2024-11-20 19:04:30.038291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.913 [2024-11-20 19:04:30.038368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.913 [2024-11-20 19:04:30.038390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.913 [2024-11-20 19:04:30.038401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.913 [2024-11-20 19:04:30.038410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:07.913 [2024-11-20 19:04:30.038431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:07.913 qpair failed and we were unable to recover it.
00:27:07.913 [2024-11-20 19:04:30.048281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.913 [2024-11-20 19:04:30.048346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.913 [2024-11-20 19:04:30.048366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.913 [2024-11-20 19:04:30.048374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.913 [2024-11-20 19:04:30.048380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:07.913 [2024-11-20 19:04:30.048397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:07.913 qpair failed and we were unable to recover it.
00:27:07.913 [2024-11-20 19:04:30.058311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.913 [2024-11-20 19:04:30.058368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.913 [2024-11-20 19:04:30.058385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.913 [2024-11-20 19:04:30.058393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.913 [2024-11-20 19:04:30.058399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:07.914 [2024-11-20 19:04:30.058416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:07.914 qpair failed and we were unable to recover it.
00:27:07.914 [2024-11-20 19:04:30.068297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.914 [2024-11-20 19:04:30.068385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.914 [2024-11-20 19:04:30.068402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.914 [2024-11-20 19:04:30.068410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.914 [2024-11-20 19:04:30.068417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:07.914 [2024-11-20 19:04:30.068433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:07.914 qpair failed and we were unable to recover it. 
00:27:07.914 [2024-11-20 19:04:30.078318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.914 [2024-11-20 19:04:30.078387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.914 [2024-11-20 19:04:30.078405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.914 [2024-11-20 19:04:30.078412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.914 [2024-11-20 19:04:30.078418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:07.914 [2024-11-20 19:04:30.078433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:07.914 qpair failed and we were unable to recover it. 
00:27:07.914 [2024-11-20 19:04:30.088318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.914 [2024-11-20 19:04:30.088380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.914 [2024-11-20 19:04:30.088395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.914 [2024-11-20 19:04:30.088402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.914 [2024-11-20 19:04:30.088409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:07.914 [2024-11-20 19:04:30.088423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:07.914 qpair failed and we were unable to recover it. 
00:27:07.914 [2024-11-20 19:04:30.098417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.914 [2024-11-20 19:04:30.098484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.914 [2024-11-20 19:04:30.098498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.914 [2024-11-20 19:04:30.098505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.914 [2024-11-20 19:04:30.098511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:07.914 [2024-11-20 19:04:30.098525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:07.914 qpair failed and we were unable to recover it. 
00:27:07.914 [2024-11-20 19:04:30.108381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.914 [2024-11-20 19:04:30.108476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.914 [2024-11-20 19:04:30.108503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.914 [2024-11-20 19:04:30.108510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.914 [2024-11-20 19:04:30.108517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:07.914 [2024-11-20 19:04:30.108532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:07.914 qpair failed and we were unable to recover it. 
00:27:07.914 [2024-11-20 19:04:30.118400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.914 [2024-11-20 19:04:30.118499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.914 [2024-11-20 19:04:30.118515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.914 [2024-11-20 19:04:30.118523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.914 [2024-11-20 19:04:30.118534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:07.914 [2024-11-20 19:04:30.118550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:07.914 qpair failed and we were unable to recover it. 
00:27:07.914 [2024-11-20 19:04:30.128459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.914 [2024-11-20 19:04:30.128518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.914 [2024-11-20 19:04:30.128533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.914 [2024-11-20 19:04:30.128541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.914 [2024-11-20 19:04:30.128547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:07.914 [2024-11-20 19:04:30.128563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:07.914 qpair failed and we were unable to recover it. 
00:27:07.914 [2024-11-20 19:04:30.138559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.914 [2024-11-20 19:04:30.138640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.914 [2024-11-20 19:04:30.138655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.914 [2024-11-20 19:04:30.138662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.914 [2024-11-20 19:04:30.138669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:07.914 [2024-11-20 19:04:30.138683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:07.914 qpair failed and we were unable to recover it. 
00:27:07.914 [2024-11-20 19:04:30.148552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.914 [2024-11-20 19:04:30.148605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.914 [2024-11-20 19:04:30.148619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.914 [2024-11-20 19:04:30.148626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.914 [2024-11-20 19:04:30.148633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:07.914 [2024-11-20 19:04:30.148647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:07.914 qpair failed and we were unable to recover it. 
00:27:07.914 [2024-11-20 19:04:30.158583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.914 [2024-11-20 19:04:30.158641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.914 [2024-11-20 19:04:30.158654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.914 [2024-11-20 19:04:30.158661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.914 [2024-11-20 19:04:30.158668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:07.914 [2024-11-20 19:04:30.158683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:07.914 qpair failed and we were unable to recover it. 
00:27:07.914 [2024-11-20 19:04:30.168578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.914 [2024-11-20 19:04:30.168661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.914 [2024-11-20 19:04:30.168676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.914 [2024-11-20 19:04:30.168683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.914 [2024-11-20 19:04:30.168690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:07.914 [2024-11-20 19:04:30.168704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:07.914 qpair failed and we were unable to recover it. 
00:27:07.914 [2024-11-20 19:04:30.178571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.914 [2024-11-20 19:04:30.178629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.914 [2024-11-20 19:04:30.178642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.914 [2024-11-20 19:04:30.178650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.915 [2024-11-20 19:04:30.178656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:07.915 [2024-11-20 19:04:30.178670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:07.915 qpair failed and we were unable to recover it. 
00:27:07.915 [2024-11-20 19:04:30.188610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.915 [2024-11-20 19:04:30.188666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.915 [2024-11-20 19:04:30.188680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.915 [2024-11-20 19:04:30.188687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.915 [2024-11-20 19:04:30.188693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:07.915 [2024-11-20 19:04:30.188707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:07.915 qpair failed and we were unable to recover it. 
00:27:07.915 [2024-11-20 19:04:30.198689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.915 [2024-11-20 19:04:30.198743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.915 [2024-11-20 19:04:30.198757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.915 [2024-11-20 19:04:30.198764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.915 [2024-11-20 19:04:30.198770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:07.915 [2024-11-20 19:04:30.198784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:07.915 qpair failed and we were unable to recover it. 
00:27:07.915 [2024-11-20 19:04:30.208732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.915 [2024-11-20 19:04:30.208818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.915 [2024-11-20 19:04:30.208835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.915 [2024-11-20 19:04:30.208842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.915 [2024-11-20 19:04:30.208848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:07.915 [2024-11-20 19:04:30.208862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:07.915 qpair failed and we were unable to recover it. 
00:27:07.915 [2024-11-20 19:04:30.218809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.915 [2024-11-20 19:04:30.218872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.915 [2024-11-20 19:04:30.218887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.915 [2024-11-20 19:04:30.218894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.915 [2024-11-20 19:04:30.218900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:07.915 [2024-11-20 19:04:30.218915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:07.915 qpair failed and we were unable to recover it. 
00:27:07.915 [2024-11-20 19:04:30.228718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.915 [2024-11-20 19:04:30.228773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.915 [2024-11-20 19:04:30.228787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.915 [2024-11-20 19:04:30.228794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.915 [2024-11-20 19:04:30.228800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:07.915 [2024-11-20 19:04:30.228815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:07.915 qpair failed and we were unable to recover it. 
00:27:08.174 [2024-11-20 19:04:30.238823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.174 [2024-11-20 19:04:30.238881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.174 [2024-11-20 19:04:30.238895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.174 [2024-11-20 19:04:30.238902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.174 [2024-11-20 19:04:30.238908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.174 [2024-11-20 19:04:30.238922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.174 qpair failed and we were unable to recover it. 
00:27:08.174 [2024-11-20 19:04:30.248779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.174 [2024-11-20 19:04:30.248836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.174 [2024-11-20 19:04:30.248849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.174 [2024-11-20 19:04:30.248860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.174 [2024-11-20 19:04:30.248866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.174 [2024-11-20 19:04:30.248880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.174 qpair failed and we were unable to recover it. 
00:27:08.174 [2024-11-20 19:04:30.258876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.174 [2024-11-20 19:04:30.258958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.174 [2024-11-20 19:04:30.258972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.174 [2024-11-20 19:04:30.258979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.174 [2024-11-20 19:04:30.258985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.174 [2024-11-20 19:04:30.258999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.174 qpair failed and we were unable to recover it. 
00:27:08.174 [2024-11-20 19:04:30.268912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.174 [2024-11-20 19:04:30.268971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.174 [2024-11-20 19:04:30.268986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.174 [2024-11-20 19:04:30.268993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.174 [2024-11-20 19:04:30.268999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.174 [2024-11-20 19:04:30.269014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.174 qpair failed and we were unable to recover it. 
00:27:08.174 [2024-11-20 19:04:30.278858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.174 [2024-11-20 19:04:30.278915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.174 [2024-11-20 19:04:30.278929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.175 [2024-11-20 19:04:30.278936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.175 [2024-11-20 19:04:30.278943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.175 [2024-11-20 19:04:30.278958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.175 qpair failed and we were unable to recover it. 
00:27:08.175 [2024-11-20 19:04:30.288944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.175 [2024-11-20 19:04:30.289028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.175 [2024-11-20 19:04:30.289043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.175 [2024-11-20 19:04:30.289050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.175 [2024-11-20 19:04:30.289057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.175 [2024-11-20 19:04:30.289074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.175 qpair failed and we were unable to recover it. 
00:27:08.175 [2024-11-20 19:04:30.298920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.175 [2024-11-20 19:04:30.298975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.175 [2024-11-20 19:04:30.298989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.175 [2024-11-20 19:04:30.298996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.175 [2024-11-20 19:04:30.299003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.175 [2024-11-20 19:04:30.299017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.175 qpair failed and we were unable to recover it. 
00:27:08.175 [2024-11-20 19:04:30.309054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.175 [2024-11-20 19:04:30.309133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.175 [2024-11-20 19:04:30.309147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.175 [2024-11-20 19:04:30.309154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.175 [2024-11-20 19:04:30.309160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.175 [2024-11-20 19:04:30.309175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.175 qpair failed and we were unable to recover it. 
00:27:08.175 [2024-11-20 19:04:30.318980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.175 [2024-11-20 19:04:30.319048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.175 [2024-11-20 19:04:30.319062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.175 [2024-11-20 19:04:30.319069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.175 [2024-11-20 19:04:30.319075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.175 [2024-11-20 19:04:30.319090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.175 qpair failed and we were unable to recover it. 
00:27:08.175 [2024-11-20 19:04:30.329082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.175 [2024-11-20 19:04:30.329138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.175 [2024-11-20 19:04:30.329153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.175 [2024-11-20 19:04:30.329160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.175 [2024-11-20 19:04:30.329166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.175 [2024-11-20 19:04:30.329181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.175 qpair failed and we were unable to recover it. 
00:27:08.175 [2024-11-20 19:04:30.339131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.175 [2024-11-20 19:04:30.339185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.175 [2024-11-20 19:04:30.339199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.175 [2024-11-20 19:04:30.339211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.175 [2024-11-20 19:04:30.339218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.175 [2024-11-20 19:04:30.339232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.175 qpair failed and we were unable to recover it. 
00:27:08.175 [2024-11-20 19:04:30.349063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.175 [2024-11-20 19:04:30.349115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.175 [2024-11-20 19:04:30.349128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.175 [2024-11-20 19:04:30.349135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.175 [2024-11-20 19:04:30.349142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.175 [2024-11-20 19:04:30.349155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.175 qpair failed and we were unable to recover it. 
00:27:08.175 [2024-11-20 19:04:30.359168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.175 [2024-11-20 19:04:30.359223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.175 [2024-11-20 19:04:30.359237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.175 [2024-11-20 19:04:30.359244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.175 [2024-11-20 19:04:30.359250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:08.175 [2024-11-20 19:04:30.359265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:08.175 qpair failed and we were unable to recover it.
00:27:08.175 [2024-11-20 19:04:30.369189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.175 [2024-11-20 19:04:30.369254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.175 [2024-11-20 19:04:30.369268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.175 [2024-11-20 19:04:30.369275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.175 [2024-11-20 19:04:30.369282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:08.175 [2024-11-20 19:04:30.369296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:08.175 qpair failed and we were unable to recover it.
00:27:08.175 [2024-11-20 19:04:30.379221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.175 [2024-11-20 19:04:30.379278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.175 [2024-11-20 19:04:30.379291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.175 [2024-11-20 19:04:30.379302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.175 [2024-11-20 19:04:30.379308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:08.175 [2024-11-20 19:04:30.379323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:08.175 qpair failed and we were unable to recover it.
00:27:08.175 [2024-11-20 19:04:30.389169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.175 [2024-11-20 19:04:30.389229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.175 [2024-11-20 19:04:30.389244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.175 [2024-11-20 19:04:30.389251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.175 [2024-11-20 19:04:30.389258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:08.175 [2024-11-20 19:04:30.389273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:08.175 qpair failed and we were unable to recover it.
00:27:08.176 [2024-11-20 19:04:30.399266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.176 [2024-11-20 19:04:30.399324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.176 [2024-11-20 19:04:30.399337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.176 [2024-11-20 19:04:30.399344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.176 [2024-11-20 19:04:30.399351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:08.176 [2024-11-20 19:04:30.399365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:08.176 qpair failed and we were unable to recover it.
00:27:08.176 [2024-11-20 19:04:30.409291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.176 [2024-11-20 19:04:30.409346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.176 [2024-11-20 19:04:30.409359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.176 [2024-11-20 19:04:30.409366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.176 [2024-11-20 19:04:30.409372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:08.176 [2024-11-20 19:04:30.409387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:08.176 qpair failed and we were unable to recover it.
00:27:08.176 [2024-11-20 19:04:30.419318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.176 [2024-11-20 19:04:30.419376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.176 [2024-11-20 19:04:30.419389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.176 [2024-11-20 19:04:30.419397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.176 [2024-11-20 19:04:30.419404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:08.176 [2024-11-20 19:04:30.419421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:08.176 qpair failed and we were unable to recover it.
00:27:08.176 [2024-11-20 19:04:30.429343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.176 [2024-11-20 19:04:30.429402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.176 [2024-11-20 19:04:30.429416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.176 [2024-11-20 19:04:30.429424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.176 [2024-11-20 19:04:30.429430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:08.176 [2024-11-20 19:04:30.429445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:08.176 qpair failed and we were unable to recover it.
00:27:08.176 [2024-11-20 19:04:30.439383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.176 [2024-11-20 19:04:30.439441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.176 [2024-11-20 19:04:30.439454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.176 [2024-11-20 19:04:30.439461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.176 [2024-11-20 19:04:30.439467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:08.176 [2024-11-20 19:04:30.439482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:08.176 qpair failed and we were unable to recover it.
00:27:08.176 [2024-11-20 19:04:30.449406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.176 [2024-11-20 19:04:30.449487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.176 [2024-11-20 19:04:30.449500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.176 [2024-11-20 19:04:30.449507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.176 [2024-11-20 19:04:30.449513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:08.176 [2024-11-20 19:04:30.449527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:08.176 qpair failed and we were unable to recover it.
00:27:08.176 [2024-11-20 19:04:30.459434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.176 [2024-11-20 19:04:30.459488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.176 [2024-11-20 19:04:30.459502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.176 [2024-11-20 19:04:30.459509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.176 [2024-11-20 19:04:30.459515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:08.176 [2024-11-20 19:04:30.459530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:08.176 qpair failed and we were unable to recover it.
00:27:08.176 [2024-11-20 19:04:30.469460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.176 [2024-11-20 19:04:30.469516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.176 [2024-11-20 19:04:30.469530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.176 [2024-11-20 19:04:30.469537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.176 [2024-11-20 19:04:30.469543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:08.176 [2024-11-20 19:04:30.469558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:08.176 qpair failed and we were unable to recover it.
00:27:08.176 [2024-11-20 19:04:30.479500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.176 [2024-11-20 19:04:30.479556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.176 [2024-11-20 19:04:30.479569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.176 [2024-11-20 19:04:30.479577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.176 [2024-11-20 19:04:30.479584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:08.176 [2024-11-20 19:04:30.479599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:08.176 qpair failed and we were unable to recover it.
00:27:08.176 [2024-11-20 19:04:30.489519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.176 [2024-11-20 19:04:30.489578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.176 [2024-11-20 19:04:30.489591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.176 [2024-11-20 19:04:30.489599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.176 [2024-11-20 19:04:30.489605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:08.176 [2024-11-20 19:04:30.489619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:08.176 qpair failed and we were unable to recover it.
00:27:08.436 [2024-11-20 19:04:30.499545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.436 [2024-11-20 19:04:30.499602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.436 [2024-11-20 19:04:30.499615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.436 [2024-11-20 19:04:30.499623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.436 [2024-11-20 19:04:30.499629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:08.436 [2024-11-20 19:04:30.499643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:08.436 qpair failed and we were unable to recover it.
00:27:08.436 [2024-11-20 19:04:30.509576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.436 [2024-11-20 19:04:30.509632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.436 [2024-11-20 19:04:30.509649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.436 [2024-11-20 19:04:30.509656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.437 [2024-11-20 19:04:30.509663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:08.437 [2024-11-20 19:04:30.509677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:08.437 qpair failed and we were unable to recover it.
00:27:08.437 [2024-11-20 19:04:30.519575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.437 [2024-11-20 19:04:30.519628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.437 [2024-11-20 19:04:30.519642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.437 [2024-11-20 19:04:30.519648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.437 [2024-11-20 19:04:30.519655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:08.437 [2024-11-20 19:04:30.519670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:08.437 qpair failed and we were unable to recover it.
00:27:08.437 [2024-11-20 19:04:30.529725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.437 [2024-11-20 19:04:30.529792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.437 [2024-11-20 19:04:30.529805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.437 [2024-11-20 19:04:30.529813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.437 [2024-11-20 19:04:30.529819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:08.437 [2024-11-20 19:04:30.529833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:08.437 qpair failed and we were unable to recover it.
00:27:08.437 [2024-11-20 19:04:30.539710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.437 [2024-11-20 19:04:30.539770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.437 [2024-11-20 19:04:30.539784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.437 [2024-11-20 19:04:30.539791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.437 [2024-11-20 19:04:30.539798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:08.437 [2024-11-20 19:04:30.539812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:08.437 qpair failed and we were unable to recover it.
00:27:08.437 [2024-11-20 19:04:30.549735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.437 [2024-11-20 19:04:30.549788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.437 [2024-11-20 19:04:30.549802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.437 [2024-11-20 19:04:30.549809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.437 [2024-11-20 19:04:30.549819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:08.437 [2024-11-20 19:04:30.549833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:08.437 qpair failed and we were unable to recover it.
00:27:08.437 [2024-11-20 19:04:30.559756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.437 [2024-11-20 19:04:30.559814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.437 [2024-11-20 19:04:30.559827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.437 [2024-11-20 19:04:30.559834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.437 [2024-11-20 19:04:30.559841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:08.437 [2024-11-20 19:04:30.559855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:08.437 qpair failed and we were unable to recover it.
00:27:08.437 [2024-11-20 19:04:30.569796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.437 [2024-11-20 19:04:30.569854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.437 [2024-11-20 19:04:30.569868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.437 [2024-11-20 19:04:30.569875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.437 [2024-11-20 19:04:30.569881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:08.437 [2024-11-20 19:04:30.569895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:08.437 qpair failed and we were unable to recover it.
00:27:08.437 [2024-11-20 19:04:30.579791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.437 [2024-11-20 19:04:30.579841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.437 [2024-11-20 19:04:30.579854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.437 [2024-11-20 19:04:30.579862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.437 [2024-11-20 19:04:30.579867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:08.437 [2024-11-20 19:04:30.579882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:08.437 qpair failed and we were unable to recover it.
00:27:08.437 [2024-11-20 19:04:30.589814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.437 [2024-11-20 19:04:30.589868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.437 [2024-11-20 19:04:30.589882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.437 [2024-11-20 19:04:30.589889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.437 [2024-11-20 19:04:30.589895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:08.437 [2024-11-20 19:04:30.589910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:08.437 qpair failed and we were unable to recover it.
00:27:08.437 [2024-11-20 19:04:30.599833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.437 [2024-11-20 19:04:30.599890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.437 [2024-11-20 19:04:30.599904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.437 [2024-11-20 19:04:30.599911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.437 [2024-11-20 19:04:30.599917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:08.437 [2024-11-20 19:04:30.599932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:08.437 qpair failed and we were unable to recover it.
00:27:08.437 [2024-11-20 19:04:30.609908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.437 [2024-11-20 19:04:30.609961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.437 [2024-11-20 19:04:30.609975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.437 [2024-11-20 19:04:30.609982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.437 [2024-11-20 19:04:30.609989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:08.437 [2024-11-20 19:04:30.610004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:08.437 qpair failed and we were unable to recover it.
00:27:08.437 [2024-11-20 19:04:30.619910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.437 [2024-11-20 19:04:30.619971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.438 [2024-11-20 19:04:30.619986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.438 [2024-11-20 19:04:30.619993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.438 [2024-11-20 19:04:30.619999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:08.438 [2024-11-20 19:04:30.620013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:08.438 qpair failed and we were unable to recover it.
00:27:08.438 [2024-11-20 19:04:30.629960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.438 [2024-11-20 19:04:30.630026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.438 [2024-11-20 19:04:30.630040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.438 [2024-11-20 19:04:30.630047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.438 [2024-11-20 19:04:30.630053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:08.438 [2024-11-20 19:04:30.630068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:08.438 qpair failed and we were unable to recover it.
00:27:08.438 [2024-11-20 19:04:30.639963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.438 [2024-11-20 19:04:30.640020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.438 [2024-11-20 19:04:30.640037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.438 [2024-11-20 19:04:30.640044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.438 [2024-11-20 19:04:30.640051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:08.438 [2024-11-20 19:04:30.640065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:08.438 qpair failed and we were unable to recover it.
00:27:08.438 [2024-11-20 19:04:30.649999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.438 [2024-11-20 19:04:30.650053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.438 [2024-11-20 19:04:30.650067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.438 [2024-11-20 19:04:30.650074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.438 [2024-11-20 19:04:30.650080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:08.438 [2024-11-20 19:04:30.650094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:08.438 qpair failed and we were unable to recover it.
00:27:08.438 [2024-11-20 19:04:30.660063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.438 [2024-11-20 19:04:30.660124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.438 [2024-11-20 19:04:30.660138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.438 [2024-11-20 19:04:30.660145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.438 [2024-11-20 19:04:30.660151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:08.438 [2024-11-20 19:04:30.660166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:08.438 qpair failed and we were unable to recover it.
00:27:08.438 [2024-11-20 19:04:30.670039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.438 [2024-11-20 19:04:30.670103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.438 [2024-11-20 19:04:30.670135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.438 [2024-11-20 19:04:30.670143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.438 [2024-11-20 19:04:30.670149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:08.438 [2024-11-20 19:04:30.670173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:08.438 qpair failed and we were unable to recover it.
00:27:08.438 [2024-11-20 19:04:30.680088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.438 [2024-11-20 19:04:30.680146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.438 [2024-11-20 19:04:30.680161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.438 [2024-11-20 19:04:30.680168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.438 [2024-11-20 19:04:30.680178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:08.438 [2024-11-20 19:04:30.680193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:08.438 qpair failed and we were unable to recover it.
00:27:08.438 [2024-11-20 19:04:30.690119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.438 [2024-11-20 19:04:30.690174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.438 [2024-11-20 19:04:30.690189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.438 [2024-11-20 19:04:30.690196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.438 [2024-11-20 19:04:30.690207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:08.438 [2024-11-20 19:04:30.690223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:08.438 qpair failed and we were unable to recover it.
00:27:08.438 [2024-11-20 19:04:30.700131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.438 [2024-11-20 19:04:30.700187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.438 [2024-11-20 19:04:30.700206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.438 [2024-11-20 19:04:30.700214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.438 [2024-11-20 19:04:30.700221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:08.438 [2024-11-20 19:04:30.700236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:08.438 qpair failed and we were unable to recover it.
00:27:08.438 [2024-11-20 19:04:30.710171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.438 [2024-11-20 19:04:30.710227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.438 [2024-11-20 19:04:30.710241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.438 [2024-11-20 19:04:30.710250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.438 [2024-11-20 19:04:30.710256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.438 [2024-11-20 19:04:30.710271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.438 qpair failed and we were unable to recover it. 
00:27:08.438 [2024-11-20 19:04:30.720209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.438 [2024-11-20 19:04:30.720274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.438 [2024-11-20 19:04:30.720288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.438 [2024-11-20 19:04:30.720296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.438 [2024-11-20 19:04:30.720303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.438 [2024-11-20 19:04:30.720317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.438 qpair failed and we were unable to recover it. 
00:27:08.438 [2024-11-20 19:04:30.730239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.438 [2024-11-20 19:04:30.730296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.438 [2024-11-20 19:04:30.730311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.439 [2024-11-20 19:04:30.730318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.439 [2024-11-20 19:04:30.730325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.439 [2024-11-20 19:04:30.730341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.439 qpair failed and we were unable to recover it. 
00:27:08.439 [2024-11-20 19:04:30.740264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.439 [2024-11-20 19:04:30.740327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.439 [2024-11-20 19:04:30.740341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.439 [2024-11-20 19:04:30.740349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.439 [2024-11-20 19:04:30.740355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.439 [2024-11-20 19:04:30.740370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.439 qpair failed and we were unable to recover it. 
00:27:08.439 [2024-11-20 19:04:30.750289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.439 [2024-11-20 19:04:30.750348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.439 [2024-11-20 19:04:30.750362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.439 [2024-11-20 19:04:30.750369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.439 [2024-11-20 19:04:30.750376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.439 [2024-11-20 19:04:30.750391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.439 qpair failed and we were unable to recover it. 
00:27:08.439 [2024-11-20 19:04:30.760295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.439 [2024-11-20 19:04:30.760349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.439 [2024-11-20 19:04:30.760363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.439 [2024-11-20 19:04:30.760370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.439 [2024-11-20 19:04:30.760377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.439 [2024-11-20 19:04:30.760392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.439 qpair failed and we were unable to recover it. 
00:27:08.699 [2024-11-20 19:04:30.770350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.699 [2024-11-20 19:04:30.770405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.699 [2024-11-20 19:04:30.770424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.699 [2024-11-20 19:04:30.770432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.699 [2024-11-20 19:04:30.770438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.699 [2024-11-20 19:04:30.770453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.699 qpair failed and we were unable to recover it. 
00:27:08.699 [2024-11-20 19:04:30.780332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.699 [2024-11-20 19:04:30.780386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.699 [2024-11-20 19:04:30.780400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.699 [2024-11-20 19:04:30.780407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.699 [2024-11-20 19:04:30.780413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.699 [2024-11-20 19:04:30.780428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.699 qpair failed and we were unable to recover it. 
00:27:08.699 [2024-11-20 19:04:30.790417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.699 [2024-11-20 19:04:30.790484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.699 [2024-11-20 19:04:30.790499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.699 [2024-11-20 19:04:30.790507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.699 [2024-11-20 19:04:30.790514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.699 [2024-11-20 19:04:30.790530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.699 qpair failed and we were unable to recover it. 
00:27:08.699 [2024-11-20 19:04:30.800433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.699 [2024-11-20 19:04:30.800488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.699 [2024-11-20 19:04:30.800503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.699 [2024-11-20 19:04:30.800510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.699 [2024-11-20 19:04:30.800516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.699 [2024-11-20 19:04:30.800532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.699 qpair failed and we were unable to recover it. 
00:27:08.699 [2024-11-20 19:04:30.810487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.699 [2024-11-20 19:04:30.810545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.699 [2024-11-20 19:04:30.810559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.699 [2024-11-20 19:04:30.810570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.699 [2024-11-20 19:04:30.810576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.699 [2024-11-20 19:04:30.810590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.699 qpair failed and we were unable to recover it. 
00:27:08.699 [2024-11-20 19:04:30.820513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.699 [2024-11-20 19:04:30.820562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.699 [2024-11-20 19:04:30.820576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.699 [2024-11-20 19:04:30.820583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.699 [2024-11-20 19:04:30.820589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.699 [2024-11-20 19:04:30.820604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.699 qpair failed and we were unable to recover it. 
00:27:08.699 [2024-11-20 19:04:30.830534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.699 [2024-11-20 19:04:30.830596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.699 [2024-11-20 19:04:30.830610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.699 [2024-11-20 19:04:30.830617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.699 [2024-11-20 19:04:30.830623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.699 [2024-11-20 19:04:30.830637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.699 qpair failed and we were unable to recover it. 
00:27:08.699 [2024-11-20 19:04:30.840549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.699 [2024-11-20 19:04:30.840601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.699 [2024-11-20 19:04:30.840615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.699 [2024-11-20 19:04:30.840622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.700 [2024-11-20 19:04:30.840628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.700 [2024-11-20 19:04:30.840642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.700 qpair failed and we were unable to recover it. 
00:27:08.700 [2024-11-20 19:04:30.850574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.700 [2024-11-20 19:04:30.850625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.700 [2024-11-20 19:04:30.850639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.700 [2024-11-20 19:04:30.850646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.700 [2024-11-20 19:04:30.850652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.700 [2024-11-20 19:04:30.850670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.700 qpair failed and we were unable to recover it. 
00:27:08.700 [2024-11-20 19:04:30.860605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.700 [2024-11-20 19:04:30.860662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.700 [2024-11-20 19:04:30.860676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.700 [2024-11-20 19:04:30.860684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.700 [2024-11-20 19:04:30.860691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.700 [2024-11-20 19:04:30.860706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.700 qpair failed and we were unable to recover it. 
00:27:08.700 [2024-11-20 19:04:30.870618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.700 [2024-11-20 19:04:30.870674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.700 [2024-11-20 19:04:30.870687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.700 [2024-11-20 19:04:30.870694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.700 [2024-11-20 19:04:30.870701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.700 [2024-11-20 19:04:30.870716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.700 qpair failed and we were unable to recover it. 
00:27:08.700 [2024-11-20 19:04:30.880659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.700 [2024-11-20 19:04:30.880715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.700 [2024-11-20 19:04:30.880729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.700 [2024-11-20 19:04:30.880737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.700 [2024-11-20 19:04:30.880743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.700 [2024-11-20 19:04:30.880757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.700 qpair failed and we were unable to recover it. 
00:27:08.700 [2024-11-20 19:04:30.890682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.700 [2024-11-20 19:04:30.890751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.700 [2024-11-20 19:04:30.890765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.700 [2024-11-20 19:04:30.890772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.700 [2024-11-20 19:04:30.890779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.700 [2024-11-20 19:04:30.890794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.700 qpair failed and we were unable to recover it. 
00:27:08.700 [2024-11-20 19:04:30.900726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.700 [2024-11-20 19:04:30.900787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.700 [2024-11-20 19:04:30.900801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.700 [2024-11-20 19:04:30.900808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.700 [2024-11-20 19:04:30.900814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.700 [2024-11-20 19:04:30.900828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.700 qpair failed and we were unable to recover it. 
00:27:08.700 [2024-11-20 19:04:30.910783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.700 [2024-11-20 19:04:30.910842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.700 [2024-11-20 19:04:30.910856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.700 [2024-11-20 19:04:30.910863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.700 [2024-11-20 19:04:30.910870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.700 [2024-11-20 19:04:30.910884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.700 qpair failed and we were unable to recover it. 
00:27:08.700 [2024-11-20 19:04:30.920782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.700 [2024-11-20 19:04:30.920841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.700 [2024-11-20 19:04:30.920855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.700 [2024-11-20 19:04:30.920862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.700 [2024-11-20 19:04:30.920868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.700 [2024-11-20 19:04:30.920883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.700 qpair failed and we were unable to recover it. 
00:27:08.700 [2024-11-20 19:04:30.930808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.700 [2024-11-20 19:04:30.930863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.700 [2024-11-20 19:04:30.930877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.700 [2024-11-20 19:04:30.930884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.700 [2024-11-20 19:04:30.930890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.700 [2024-11-20 19:04:30.930905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.700 qpair failed and we were unable to recover it. 
00:27:08.700 [2024-11-20 19:04:30.940798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.700 [2024-11-20 19:04:30.940853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.700 [2024-11-20 19:04:30.940866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.700 [2024-11-20 19:04:30.940877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.700 [2024-11-20 19:04:30.940883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.700 [2024-11-20 19:04:30.940897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.700 qpair failed and we were unable to recover it. 
00:27:08.700 [2024-11-20 19:04:30.950860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.700 [2024-11-20 19:04:30.950916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.700 [2024-11-20 19:04:30.950930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.700 [2024-11-20 19:04:30.950937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.700 [2024-11-20 19:04:30.950944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.700 [2024-11-20 19:04:30.950958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.700 qpair failed and we were unable to recover it. 
00:27:08.700 [2024-11-20 19:04:30.960945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.700 [2024-11-20 19:04:30.961002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.700 [2024-11-20 19:04:30.961016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.700 [2024-11-20 19:04:30.961023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.700 [2024-11-20 19:04:30.961030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.700 [2024-11-20 19:04:30.961045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.700 qpair failed and we were unable to recover it. 
00:27:08.700 [2024-11-20 19:04:30.970914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.700 [2024-11-20 19:04:30.970970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.700 [2024-11-20 19:04:30.970984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.700 [2024-11-20 19:04:30.970990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.701 [2024-11-20 19:04:30.970996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.701 [2024-11-20 19:04:30.971011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.701 qpair failed and we were unable to recover it. 
00:27:08.701 [2024-11-20 19:04:30.980954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.701 [2024-11-20 19:04:30.981006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.701 [2024-11-20 19:04:30.981020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.701 [2024-11-20 19:04:30.981027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.701 [2024-11-20 19:04:30.981033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.701 [2024-11-20 19:04:30.981051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.701 qpair failed and we were unable to recover it. 
00:27:08.701 [2024-11-20 19:04:30.990972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.701 [2024-11-20 19:04:30.991027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.701 [2024-11-20 19:04:30.991041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.701 [2024-11-20 19:04:30.991048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.701 [2024-11-20 19:04:30.991054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.701 [2024-11-20 19:04:30.991069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.701 qpair failed and we were unable to recover it. 
00:27:08.701 [2024-11-20 19:04:31.001048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.701 [2024-11-20 19:04:31.001101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.701 [2024-11-20 19:04:31.001115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.701 [2024-11-20 19:04:31.001122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.701 [2024-11-20 19:04:31.001129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.701 [2024-11-20 19:04:31.001144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.701 qpair failed and we were unable to recover it. 
00:27:08.701 [2024-11-20 19:04:31.011040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.701 [2024-11-20 19:04:31.011113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.701 [2024-11-20 19:04:31.011127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.701 [2024-11-20 19:04:31.011135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.701 [2024-11-20 19:04:31.011141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.701 [2024-11-20 19:04:31.011155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.701 qpair failed and we were unable to recover it. 
00:27:08.701 [2024-11-20 19:04:31.021055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.701 [2024-11-20 19:04:31.021109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.701 [2024-11-20 19:04:31.021123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.701 [2024-11-20 19:04:31.021130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.701 [2024-11-20 19:04:31.021137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.701 [2024-11-20 19:04:31.021152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.701 qpair failed and we were unable to recover it. 
00:27:08.961 [2024-11-20 19:04:31.031096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.961 [2024-11-20 19:04:31.031153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.961 [2024-11-20 19:04:31.031167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.961 [2024-11-20 19:04:31.031175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.961 [2024-11-20 19:04:31.031181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.961 [2024-11-20 19:04:31.031196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.961 qpair failed and we were unable to recover it. 
00:27:08.961 [2024-11-20 19:04:31.041130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.961 [2024-11-20 19:04:31.041187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.961 [2024-11-20 19:04:31.041203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.961 [2024-11-20 19:04:31.041211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.961 [2024-11-20 19:04:31.041217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.961 [2024-11-20 19:04:31.041232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.961 qpair failed and we were unable to recover it. 
00:27:08.961 [2024-11-20 19:04:31.051155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.961 [2024-11-20 19:04:31.051215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.961 [2024-11-20 19:04:31.051229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.961 [2024-11-20 19:04:31.051236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.961 [2024-11-20 19:04:31.051244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.961 [2024-11-20 19:04:31.051258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.961 qpair failed and we were unable to recover it. 
00:27:08.961 [2024-11-20 19:04:31.061191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.961 [2024-11-20 19:04:31.061259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.961 [2024-11-20 19:04:31.061274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.961 [2024-11-20 19:04:31.061281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.961 [2024-11-20 19:04:31.061297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.961 [2024-11-20 19:04:31.061314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.961 qpair failed and we were unable to recover it. 
00:27:08.961 [2024-11-20 19:04:31.071215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.961 [2024-11-20 19:04:31.071266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.961 [2024-11-20 19:04:31.071283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.961 [2024-11-20 19:04:31.071291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.961 [2024-11-20 19:04:31.071297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.961 [2024-11-20 19:04:31.071313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.961 qpair failed and we were unable to recover it. 
00:27:08.961 [2024-11-20 19:04:31.081261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.961 [2024-11-20 19:04:31.081333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.961 [2024-11-20 19:04:31.081348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.961 [2024-11-20 19:04:31.081355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.961 [2024-11-20 19:04:31.081361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.961 [2024-11-20 19:04:31.081376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.961 qpair failed and we were unable to recover it. 
00:27:08.961 [2024-11-20 19:04:31.091264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.961 [2024-11-20 19:04:31.091320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.961 [2024-11-20 19:04:31.091334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.961 [2024-11-20 19:04:31.091342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.961 [2024-11-20 19:04:31.091349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.961 [2024-11-20 19:04:31.091363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.961 qpair failed and we were unable to recover it. 
00:27:08.961 [2024-11-20 19:04:31.101297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.961 [2024-11-20 19:04:31.101350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.961 [2024-11-20 19:04:31.101364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.961 [2024-11-20 19:04:31.101371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.961 [2024-11-20 19:04:31.101378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.961 [2024-11-20 19:04:31.101393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.961 qpair failed and we were unable to recover it. 
00:27:08.961 [2024-11-20 19:04:31.111343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.961 [2024-11-20 19:04:31.111406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.961 [2024-11-20 19:04:31.111420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.961 [2024-11-20 19:04:31.111427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.961 [2024-11-20 19:04:31.111436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.961 [2024-11-20 19:04:31.111451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.961 qpair failed and we were unable to recover it. 
00:27:08.961 [2024-11-20 19:04:31.121356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.961 [2024-11-20 19:04:31.121411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.961 [2024-11-20 19:04:31.121425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.961 [2024-11-20 19:04:31.121432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.962 [2024-11-20 19:04:31.121438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.962 [2024-11-20 19:04:31.121452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.962 qpair failed and we were unable to recover it. 
00:27:08.962 [2024-11-20 19:04:31.131379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.962 [2024-11-20 19:04:31.131460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.962 [2024-11-20 19:04:31.131475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.962 [2024-11-20 19:04:31.131482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.962 [2024-11-20 19:04:31.131488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.962 [2024-11-20 19:04:31.131503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.962 qpair failed and we were unable to recover it. 
00:27:08.962 [2024-11-20 19:04:31.141413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.962 [2024-11-20 19:04:31.141466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.962 [2024-11-20 19:04:31.141479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.962 [2024-11-20 19:04:31.141486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.962 [2024-11-20 19:04:31.141494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.962 [2024-11-20 19:04:31.141508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.962 qpair failed and we were unable to recover it. 
00:27:08.962 [2024-11-20 19:04:31.151443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.962 [2024-11-20 19:04:31.151509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.962 [2024-11-20 19:04:31.151523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.962 [2024-11-20 19:04:31.151530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.962 [2024-11-20 19:04:31.151536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.962 [2024-11-20 19:04:31.151550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.962 qpair failed and we were unable to recover it. 
00:27:08.962 [2024-11-20 19:04:31.161472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.962 [2024-11-20 19:04:31.161530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.962 [2024-11-20 19:04:31.161544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.962 [2024-11-20 19:04:31.161552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.962 [2024-11-20 19:04:31.161558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.962 [2024-11-20 19:04:31.161574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.962 qpair failed and we were unable to recover it. 
00:27:08.962 [2024-11-20 19:04:31.171501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.962 [2024-11-20 19:04:31.171563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.962 [2024-11-20 19:04:31.171577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.962 [2024-11-20 19:04:31.171584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.962 [2024-11-20 19:04:31.171590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.962 [2024-11-20 19:04:31.171604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.962 qpair failed and we were unable to recover it. 
00:27:08.962 [2024-11-20 19:04:31.181525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.962 [2024-11-20 19:04:31.181578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.962 [2024-11-20 19:04:31.181592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.962 [2024-11-20 19:04:31.181599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.962 [2024-11-20 19:04:31.181606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.962 [2024-11-20 19:04:31.181621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.962 qpair failed and we were unable to recover it. 
00:27:08.962 [2024-11-20 19:04:31.191530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.962 [2024-11-20 19:04:31.191596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.962 [2024-11-20 19:04:31.191610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.962 [2024-11-20 19:04:31.191617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.962 [2024-11-20 19:04:31.191623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.962 [2024-11-20 19:04:31.191637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.962 qpair failed and we were unable to recover it. 
00:27:08.962 [2024-11-20 19:04:31.201586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.962 [2024-11-20 19:04:31.201644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.962 [2024-11-20 19:04:31.201661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.962 [2024-11-20 19:04:31.201669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.962 [2024-11-20 19:04:31.201675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.962 [2024-11-20 19:04:31.201690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.962 qpair failed and we were unable to recover it. 
00:27:08.962 [2024-11-20 19:04:31.211562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.962 [2024-11-20 19:04:31.211616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.962 [2024-11-20 19:04:31.211630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.962 [2024-11-20 19:04:31.211637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.962 [2024-11-20 19:04:31.211643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.962 [2024-11-20 19:04:31.211657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.962 qpair failed and we were unable to recover it. 
00:27:08.962 [2024-11-20 19:04:31.221582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.962 [2024-11-20 19:04:31.221640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.962 [2024-11-20 19:04:31.221653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.962 [2024-11-20 19:04:31.221661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.962 [2024-11-20 19:04:31.221667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.962 [2024-11-20 19:04:31.221682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.962 qpair failed and we were unable to recover it. 
00:27:08.962 [2024-11-20 19:04:31.231692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.962 [2024-11-20 19:04:31.231750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.962 [2024-11-20 19:04:31.231763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.962 [2024-11-20 19:04:31.231770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.962 [2024-11-20 19:04:31.231777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.962 [2024-11-20 19:04:31.231791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.962 qpair failed and we were unable to recover it. 
00:27:08.962 [2024-11-20 19:04:31.241745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.962 [2024-11-20 19:04:31.241802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.962 [2024-11-20 19:04:31.241817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.962 [2024-11-20 19:04:31.241825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.962 [2024-11-20 19:04:31.241835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.962 [2024-11-20 19:04:31.241851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.962 qpair failed and we were unable to recover it. 
00:27:08.962 [2024-11-20 19:04:31.251747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.962 [2024-11-20 19:04:31.251818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.962 [2024-11-20 19:04:31.251832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.963 [2024-11-20 19:04:31.251839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.963 [2024-11-20 19:04:31.251845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.963 [2024-11-20 19:04:31.251859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.963 qpair failed and we were unable to recover it. 
00:27:08.963 [2024-11-20 19:04:31.261759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.963 [2024-11-20 19:04:31.261817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.963 [2024-11-20 19:04:31.261830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.963 [2024-11-20 19:04:31.261838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.963 [2024-11-20 19:04:31.261844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.963 [2024-11-20 19:04:31.261859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.963 qpair failed and we were unable to recover it. 
00:27:08.963 [2024-11-20 19:04:31.271792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.963 [2024-11-20 19:04:31.271849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.963 [2024-11-20 19:04:31.271863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.963 [2024-11-20 19:04:31.271870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.963 [2024-11-20 19:04:31.271877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.963 [2024-11-20 19:04:31.271892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.963 qpair failed and we were unable to recover it. 
00:27:08.963 [2024-11-20 19:04:31.281822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.963 [2024-11-20 19:04:31.281877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.963 [2024-11-20 19:04:31.281891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.963 [2024-11-20 19:04:31.281897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.963 [2024-11-20 19:04:31.281904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:08.963 [2024-11-20 19:04:31.281918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.963 qpair failed and we were unable to recover it. 
00:27:09.223 [2024-11-20 19:04:31.291829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.223 [2024-11-20 19:04:31.291882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.223 [2024-11-20 19:04:31.291896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.223 [2024-11-20 19:04:31.291903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.223 [2024-11-20 19:04:31.291909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.223 [2024-11-20 19:04:31.291923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.223 qpair failed and we were unable to recover it. 
00:27:09.223 [2024-11-20 19:04:31.301872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.223 [2024-11-20 19:04:31.301934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.223 [2024-11-20 19:04:31.301949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.223 [2024-11-20 19:04:31.301956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.223 [2024-11-20 19:04:31.301962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.223 [2024-11-20 19:04:31.301977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.223 qpair failed and we were unable to recover it. 
00:27:09.223 [2024-11-20 19:04:31.311918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.223 [2024-11-20 19:04:31.311968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.223 [2024-11-20 19:04:31.311981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.223 [2024-11-20 19:04:31.311988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.223 [2024-11-20 19:04:31.311995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.223 [2024-11-20 19:04:31.312010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.223 qpair failed and we were unable to recover it. 
00:27:09.223 [2024-11-20 19:04:31.321942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.223 [2024-11-20 19:04:31.321997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.223 [2024-11-20 19:04:31.322011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.223 [2024-11-20 19:04:31.322018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.223 [2024-11-20 19:04:31.322024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.223 [2024-11-20 19:04:31.322039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.223 qpair failed and we were unable to recover it. 
00:27:09.223 [2024-11-20 19:04:31.331977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.224 [2024-11-20 19:04:31.332036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.224 [2024-11-20 19:04:31.332053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.224 [2024-11-20 19:04:31.332059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.224 [2024-11-20 19:04:31.332066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.224 [2024-11-20 19:04:31.332081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.224 qpair failed and we were unable to recover it. 
00:27:09.224 [2024-11-20 19:04:31.342004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.224 [2024-11-20 19:04:31.342052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.224 [2024-11-20 19:04:31.342066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.224 [2024-11-20 19:04:31.342073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.224 [2024-11-20 19:04:31.342080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.224 [2024-11-20 19:04:31.342095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.224 qpair failed and we were unable to recover it. 
00:27:09.224 [2024-11-20 19:04:31.352015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.224 [2024-11-20 19:04:31.352069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.224 [2024-11-20 19:04:31.352083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.224 [2024-11-20 19:04:31.352090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.224 [2024-11-20 19:04:31.352097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.224 [2024-11-20 19:04:31.352111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.224 qpair failed and we were unable to recover it. 
00:27:09.224 [2024-11-20 19:04:31.362054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.224 [2024-11-20 19:04:31.362110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.224 [2024-11-20 19:04:31.362125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.224 [2024-11-20 19:04:31.362133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.224 [2024-11-20 19:04:31.362140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.224 [2024-11-20 19:04:31.362154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.224 qpair failed and we were unable to recover it. 
00:27:09.224 [2024-11-20 19:04:31.372083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.224 [2024-11-20 19:04:31.372140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.224 [2024-11-20 19:04:31.372155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.224 [2024-11-20 19:04:31.372166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.224 [2024-11-20 19:04:31.372172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.224 [2024-11-20 19:04:31.372187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.224 qpair failed and we were unable to recover it. 
00:27:09.224 [2024-11-20 19:04:31.382117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.224 [2024-11-20 19:04:31.382172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.224 [2024-11-20 19:04:31.382188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.224 [2024-11-20 19:04:31.382197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.224 [2024-11-20 19:04:31.382207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.224 [2024-11-20 19:04:31.382222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.224 qpair failed and we were unable to recover it. 
00:27:09.224 [2024-11-20 19:04:31.392160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.224 [2024-11-20 19:04:31.392220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.224 [2024-11-20 19:04:31.392234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.224 [2024-11-20 19:04:31.392242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.224 [2024-11-20 19:04:31.392248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.224 [2024-11-20 19:04:31.392263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.224 qpair failed and we were unable to recover it. 
00:27:09.224 [2024-11-20 19:04:31.402176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.224 [2024-11-20 19:04:31.402240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.224 [2024-11-20 19:04:31.402254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.224 [2024-11-20 19:04:31.402262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.224 [2024-11-20 19:04:31.402268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.224 [2024-11-20 19:04:31.402283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.224 qpair failed and we were unable to recover it. 
00:27:09.224 [2024-11-20 19:04:31.412188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.224 [2024-11-20 19:04:31.412248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.224 [2024-11-20 19:04:31.412262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.224 [2024-11-20 19:04:31.412270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.224 [2024-11-20 19:04:31.412276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.224 [2024-11-20 19:04:31.412294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.224 qpair failed and we were unable to recover it. 
00:27:09.224 [2024-11-20 19:04:31.422158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.224 [2024-11-20 19:04:31.422216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.224 [2024-11-20 19:04:31.422231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.224 [2024-11-20 19:04:31.422237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.224 [2024-11-20 19:04:31.422243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.224 [2024-11-20 19:04:31.422260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.224 qpair failed and we were unable to recover it. 
00:27:09.224 [2024-11-20 19:04:31.432231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.224 [2024-11-20 19:04:31.432288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.224 [2024-11-20 19:04:31.432302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.224 [2024-11-20 19:04:31.432309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.224 [2024-11-20 19:04:31.432316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.224 [2024-11-20 19:04:31.432331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.224 qpair failed and we were unable to recover it. 
00:27:09.224 [2024-11-20 19:04:31.442294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.224 [2024-11-20 19:04:31.442351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.224 [2024-11-20 19:04:31.442365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.224 [2024-11-20 19:04:31.442372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.224 [2024-11-20 19:04:31.442378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.224 [2024-11-20 19:04:31.442393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.224 qpair failed and we were unable to recover it. 
00:27:09.224 [2024-11-20 19:04:31.452251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.225 [2024-11-20 19:04:31.452308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.225 [2024-11-20 19:04:31.452322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.225 [2024-11-20 19:04:31.452329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.225 [2024-11-20 19:04:31.452335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.225 [2024-11-20 19:04:31.452350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.225 qpair failed and we were unable to recover it. 
00:27:09.225 [2024-11-20 19:04:31.462339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.225 [2024-11-20 19:04:31.462395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.225 [2024-11-20 19:04:31.462409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.225 [2024-11-20 19:04:31.462416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.225 [2024-11-20 19:04:31.462422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.225 [2024-11-20 19:04:31.462438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.225 qpair failed and we were unable to recover it. 
00:27:09.225 [2024-11-20 19:04:31.472332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.225 [2024-11-20 19:04:31.472409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.225 [2024-11-20 19:04:31.472423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.225 [2024-11-20 19:04:31.472429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.225 [2024-11-20 19:04:31.472436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.225 [2024-11-20 19:04:31.472451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.225 qpair failed and we were unable to recover it. 
00:27:09.225 [2024-11-20 19:04:31.482319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.225 [2024-11-20 19:04:31.482376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.225 [2024-11-20 19:04:31.482390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.225 [2024-11-20 19:04:31.482397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.225 [2024-11-20 19:04:31.482403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.225 [2024-11-20 19:04:31.482417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.225 qpair failed and we were unable to recover it. 
00:27:09.225 [2024-11-20 19:04:31.492410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.225 [2024-11-20 19:04:31.492473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.225 [2024-11-20 19:04:31.492487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.225 [2024-11-20 19:04:31.492494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.225 [2024-11-20 19:04:31.492500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.225 [2024-11-20 19:04:31.492515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.225 qpair failed and we were unable to recover it. 
00:27:09.225 [2024-11-20 19:04:31.502467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.225 [2024-11-20 19:04:31.502537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.225 [2024-11-20 19:04:31.502551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.225 [2024-11-20 19:04:31.502563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.225 [2024-11-20 19:04:31.502570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.225 [2024-11-20 19:04:31.502584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.225 qpair failed and we were unable to recover it. 
00:27:09.225 [2024-11-20 19:04:31.512449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.225 [2024-11-20 19:04:31.512498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.225 [2024-11-20 19:04:31.512512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.225 [2024-11-20 19:04:31.512519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.225 [2024-11-20 19:04:31.512525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.225 [2024-11-20 19:04:31.512539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.225 qpair failed and we were unable to recover it. 
00:27:09.225 [2024-11-20 19:04:31.522452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.225 [2024-11-20 19:04:31.522507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.225 [2024-11-20 19:04:31.522520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.225 [2024-11-20 19:04:31.522527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.225 [2024-11-20 19:04:31.522533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.225 [2024-11-20 19:04:31.522548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.225 qpair failed and we were unable to recover it. 
00:27:09.225 [2024-11-20 19:04:31.532555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.225 [2024-11-20 19:04:31.532640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.225 [2024-11-20 19:04:31.532654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.225 [2024-11-20 19:04:31.532661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.225 [2024-11-20 19:04:31.532667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.225 [2024-11-20 19:04:31.532681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.225 qpair failed and we were unable to recover it. 
00:27:09.225 [2024-11-20 19:04:31.542532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.225 [2024-11-20 19:04:31.542583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.225 [2024-11-20 19:04:31.542597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.225 [2024-11-20 19:04:31.542604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.225 [2024-11-20 19:04:31.542610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.225 [2024-11-20 19:04:31.542627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.225 qpair failed and we were unable to recover it. 
00:27:09.485 [2024-11-20 19:04:31.552560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.485 [2024-11-20 19:04:31.552611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.485 [2024-11-20 19:04:31.552625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.485 [2024-11-20 19:04:31.552632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.485 [2024-11-20 19:04:31.552639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.485 [2024-11-20 19:04:31.552653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.485 qpair failed and we were unable to recover it. 
00:27:09.486 [2024-11-20 19:04:31.562629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.486 [2024-11-20 19:04:31.562685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.486 [2024-11-20 19:04:31.562699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.486 [2024-11-20 19:04:31.562706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.486 [2024-11-20 19:04:31.562712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.486 [2024-11-20 19:04:31.562726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.486 qpair failed and we were unable to recover it. 
00:27:09.486 [2024-11-20 19:04:31.572579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.486 [2024-11-20 19:04:31.572640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.486 [2024-11-20 19:04:31.572654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.486 [2024-11-20 19:04:31.572661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.486 [2024-11-20 19:04:31.572668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.486 [2024-11-20 19:04:31.572682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.486 qpair failed and we were unable to recover it. 
00:27:09.486 [2024-11-20 19:04:31.582616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.486 [2024-11-20 19:04:31.582666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.486 [2024-11-20 19:04:31.582679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.486 [2024-11-20 19:04:31.582686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.486 [2024-11-20 19:04:31.582692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.486 [2024-11-20 19:04:31.582707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.486 qpair failed and we were unable to recover it. 
00:27:09.486 [2024-11-20 19:04:31.592723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.486 [2024-11-20 19:04:31.592792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.486 [2024-11-20 19:04:31.592808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.486 [2024-11-20 19:04:31.592815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.486 [2024-11-20 19:04:31.592822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.486 [2024-11-20 19:04:31.592838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.486 qpair failed and we were unable to recover it. 
00:27:09.486 [2024-11-20 19:04:31.602681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.486 [2024-11-20 19:04:31.602756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.486 [2024-11-20 19:04:31.602770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.486 [2024-11-20 19:04:31.602777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.486 [2024-11-20 19:04:31.602784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:09.486 [2024-11-20 19:04:31.602798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:09.486 qpair failed and we were unable to recover it.
00:27:09.486 [2024-11-20 19:04:31.612763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.486 [2024-11-20 19:04:31.612819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.486 [2024-11-20 19:04:31.612832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.486 [2024-11-20 19:04:31.612839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.486 [2024-11-20 19:04:31.612845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:09.486 [2024-11-20 19:04:31.612860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:09.486 qpair failed and we were unable to recover it.
00:27:09.486 [2024-11-20 19:04:31.622830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.486 [2024-11-20 19:04:31.622884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.486 [2024-11-20 19:04:31.622898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.486 [2024-11-20 19:04:31.622904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.486 [2024-11-20 19:04:31.622911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:09.486 [2024-11-20 19:04:31.622925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:09.486 qpair failed and we were unable to recover it.
00:27:09.486 [2024-11-20 19:04:31.632745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.486 [2024-11-20 19:04:31.632843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.486 [2024-11-20 19:04:31.632860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.486 [2024-11-20 19:04:31.632867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.486 [2024-11-20 19:04:31.632873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:09.486 [2024-11-20 19:04:31.632888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:09.486 qpair failed and we were unable to recover it.
00:27:09.486 [2024-11-20 19:04:31.642833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.486 [2024-11-20 19:04:31.642886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.486 [2024-11-20 19:04:31.642900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.486 [2024-11-20 19:04:31.642907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.486 [2024-11-20 19:04:31.642914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:09.486 [2024-11-20 19:04:31.642929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:09.486 qpair failed and we were unable to recover it.
00:27:09.486 [2024-11-20 19:04:31.652853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.486 [2024-11-20 19:04:31.652906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.486 [2024-11-20 19:04:31.652920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.486 [2024-11-20 19:04:31.652927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.486 [2024-11-20 19:04:31.652933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:09.486 [2024-11-20 19:04:31.652948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:09.486 qpair failed and we were unable to recover it.
00:27:09.486 [2024-11-20 19:04:31.662814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.486 [2024-11-20 19:04:31.662911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.486 [2024-11-20 19:04:31.662925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.486 [2024-11-20 19:04:31.662932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.486 [2024-11-20 19:04:31.662938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:09.486 [2024-11-20 19:04:31.662953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:09.486 qpair failed and we were unable to recover it.
00:27:09.487 [2024-11-20 19:04:31.672947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.487 [2024-11-20 19:04:31.673022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.487 [2024-11-20 19:04:31.673036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.487 [2024-11-20 19:04:31.673044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.487 [2024-11-20 19:04:31.673053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:09.487 [2024-11-20 19:04:31.673067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:09.487 qpair failed and we were unable to recover it.
00:27:09.487 [2024-11-20 19:04:31.682929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.487 [2024-11-20 19:04:31.682983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.487 [2024-11-20 19:04:31.682997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.487 [2024-11-20 19:04:31.683004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.487 [2024-11-20 19:04:31.683011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:09.487 [2024-11-20 19:04:31.683025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:09.487 qpair failed and we were unable to recover it.
00:27:09.487 [2024-11-20 19:04:31.693047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.487 [2024-11-20 19:04:31.693102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.487 [2024-11-20 19:04:31.693117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.487 [2024-11-20 19:04:31.693125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.487 [2024-11-20 19:04:31.693131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:09.487 [2024-11-20 19:04:31.693146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:09.487 qpair failed and we were unable to recover it.
00:27:09.487 [2024-11-20 19:04:31.703003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.487 [2024-11-20 19:04:31.703058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.487 [2024-11-20 19:04:31.703073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.487 [2024-11-20 19:04:31.703080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.487 [2024-11-20 19:04:31.703087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:09.487 [2024-11-20 19:04:31.703101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:09.487 qpair failed and we were unable to recover it.
00:27:09.487 [2024-11-20 19:04:31.712966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.487 [2024-11-20 19:04:31.713033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.487 [2024-11-20 19:04:31.713048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.487 [2024-11-20 19:04:31.713055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.487 [2024-11-20 19:04:31.713061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:09.487 [2024-11-20 19:04:31.713076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:09.487 qpair failed and we were unable to recover it.
00:27:09.487 [2024-11-20 19:04:31.723049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.487 [2024-11-20 19:04:31.723135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.487 [2024-11-20 19:04:31.723149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.487 [2024-11-20 19:04:31.723158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.487 [2024-11-20 19:04:31.723164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:09.487 [2024-11-20 19:04:31.723178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:09.487 qpair failed and we were unable to recover it.
00:27:09.487 [2024-11-20 19:04:31.733027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.487 [2024-11-20 19:04:31.733088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.487 [2024-11-20 19:04:31.733102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.487 [2024-11-20 19:04:31.733109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.487 [2024-11-20 19:04:31.733115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:09.487 [2024-11-20 19:04:31.733130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:09.487 qpair failed and we were unable to recover it.
00:27:09.487 [2024-11-20 19:04:31.743124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.487 [2024-11-20 19:04:31.743176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.487 [2024-11-20 19:04:31.743190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.487 [2024-11-20 19:04:31.743197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.487 [2024-11-20 19:04:31.743208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:09.487 [2024-11-20 19:04:31.743223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:09.487 qpair failed and we were unable to recover it.
00:27:09.487 [2024-11-20 19:04:31.753153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.487 [2024-11-20 19:04:31.753216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.487 [2024-11-20 19:04:31.753230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.487 [2024-11-20 19:04:31.753237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.487 [2024-11-20 19:04:31.753243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:09.487 [2024-11-20 19:04:31.753259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:09.487 qpair failed and we were unable to recover it.
00:27:09.487 [2024-11-20 19:04:31.763248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.487 [2024-11-20 19:04:31.763343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.487 [2024-11-20 19:04:31.763363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.487 [2024-11-20 19:04:31.763370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.487 [2024-11-20 19:04:31.763376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:09.487 [2024-11-20 19:04:31.763391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:09.487 qpair failed and we were unable to recover it.
00:27:09.487 [2024-11-20 19:04:31.773145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.487 [2024-11-20 19:04:31.773208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.488 [2024-11-20 19:04:31.773223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.488 [2024-11-20 19:04:31.773229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.488 [2024-11-20 19:04:31.773236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:09.488 [2024-11-20 19:04:31.773251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:09.488 qpair failed and we were unable to recover it.
00:27:09.488 [2024-11-20 19:04:31.783243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.488 [2024-11-20 19:04:31.783314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.488 [2024-11-20 19:04:31.783328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.488 [2024-11-20 19:04:31.783335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.488 [2024-11-20 19:04:31.783341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:09.488 [2024-11-20 19:04:31.783357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:09.488 qpair failed and we were unable to recover it.
00:27:09.488 [2024-11-20 19:04:31.793267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.488 [2024-11-20 19:04:31.793320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.488 [2024-11-20 19:04:31.793334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.488 [2024-11-20 19:04:31.793341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.488 [2024-11-20 19:04:31.793347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:09.488 [2024-11-20 19:04:31.793362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:09.488 qpair failed and we were unable to recover it.
00:27:09.488 [2024-11-20 19:04:31.803294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.488 [2024-11-20 19:04:31.803351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.488 [2024-11-20 19:04:31.803365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.488 [2024-11-20 19:04:31.803372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.488 [2024-11-20 19:04:31.803381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:09.488 [2024-11-20 19:04:31.803396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:09.488 qpair failed and we were unable to recover it.
00:27:09.750 [2024-11-20 19:04:31.813320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.750 [2024-11-20 19:04:31.813370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.750 [2024-11-20 19:04:31.813384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.750 [2024-11-20 19:04:31.813390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.750 [2024-11-20 19:04:31.813396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:09.750 [2024-11-20 19:04:31.813412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:09.750 qpair failed and we were unable to recover it.
00:27:09.750 [2024-11-20 19:04:31.823346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.750 [2024-11-20 19:04:31.823397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.750 [2024-11-20 19:04:31.823410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.750 [2024-11-20 19:04:31.823417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.750 [2024-11-20 19:04:31.823423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:09.750 [2024-11-20 19:04:31.823438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:09.750 qpair failed and we were unable to recover it.
00:27:09.750 [2024-11-20 19:04:31.833408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.750 [2024-11-20 19:04:31.833461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.750 [2024-11-20 19:04:31.833475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.750 [2024-11-20 19:04:31.833481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.750 [2024-11-20 19:04:31.833488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:09.750 [2024-11-20 19:04:31.833502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:09.750 qpair failed and we were unable to recover it.
00:27:09.750 [2024-11-20 19:04:31.843402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.750 [2024-11-20 19:04:31.843503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.750 [2024-11-20 19:04:31.843517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.750 [2024-11-20 19:04:31.843524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.750 [2024-11-20 19:04:31.843530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:09.750 [2024-11-20 19:04:31.843544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:09.750 qpair failed and we were unable to recover it.
00:27:09.750 [2024-11-20 19:04:31.853437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.750 [2024-11-20 19:04:31.853492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.750 [2024-11-20 19:04:31.853505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.750 [2024-11-20 19:04:31.853512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.750 [2024-11-20 19:04:31.853519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:09.750 [2024-11-20 19:04:31.853533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:09.750 qpair failed and we were unable to recover it.
00:27:09.751 [2024-11-20 19:04:31.863460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.751 [2024-11-20 19:04:31.863514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.751 [2024-11-20 19:04:31.863528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.751 [2024-11-20 19:04:31.863536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.751 [2024-11-20 19:04:31.863542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:09.751 [2024-11-20 19:04:31.863557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:09.751 qpair failed and we were unable to recover it.
00:27:09.751 [2024-11-20 19:04:31.873479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.751 [2024-11-20 19:04:31.873535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.751 [2024-11-20 19:04:31.873549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.751 [2024-11-20 19:04:31.873557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.751 [2024-11-20 19:04:31.873563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:09.751 [2024-11-20 19:04:31.873578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:09.751 qpair failed and we were unable to recover it.
00:27:09.751 [2024-11-20 19:04:31.883564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.751 [2024-11-20 19:04:31.883616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.751 [2024-11-20 19:04:31.883631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.751 [2024-11-20 19:04:31.883638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.751 [2024-11-20 19:04:31.883646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:09.751 [2024-11-20 19:04:31.883661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:09.751 qpair failed and we were unable to recover it.
00:27:09.751 [2024-11-20 19:04:31.893551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.751 [2024-11-20 19:04:31.893612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.751 [2024-11-20 19:04:31.893626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.751 [2024-11-20 19:04:31.893633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.751 [2024-11-20 19:04:31.893640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:09.751 [2024-11-20 19:04:31.893654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:09.751 qpair failed and we were unable to recover it.
00:27:09.751 [2024-11-20 19:04:31.903560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.751 [2024-11-20 19:04:31.903622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.751 [2024-11-20 19:04:31.903636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.751 [2024-11-20 19:04:31.903643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.751 [2024-11-20 19:04:31.903650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:09.751 [2024-11-20 19:04:31.903665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:09.751 qpair failed and we were unable to recover it.
00:27:09.751 [2024-11-20 19:04:31.913505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.751 [2024-11-20 19:04:31.913574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.751 [2024-11-20 19:04:31.913588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.751 [2024-11-20 19:04:31.913596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.751 [2024-11-20 19:04:31.913601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:09.751 [2024-11-20 19:04:31.913616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:09.751 qpair failed and we were unable to recover it.
00:27:09.751 [2024-11-20 19:04:31.923620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.751 [2024-11-20 19:04:31.923676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.751 [2024-11-20 19:04:31.923689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.751 [2024-11-20 19:04:31.923696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.751 [2024-11-20 19:04:31.923703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:09.751 [2024-11-20 19:04:31.923717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:09.751 qpair failed and we were unable to recover it.
00:27:09.751 [2024-11-20 19:04:31.933641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.751 [2024-11-20 19:04:31.933695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.751 [2024-11-20 19:04:31.933710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.751 [2024-11-20 19:04:31.933720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.751 [2024-11-20 19:04:31.933726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:09.751 [2024-11-20 19:04:31.933741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:09.751 qpair failed and we were unable to recover it.
00:27:09.751 [2024-11-20 19:04:31.943680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.751 [2024-11-20 19:04:31.943733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.751 [2024-11-20 19:04:31.943746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.751 [2024-11-20 19:04:31.943753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.751 [2024-11-20 19:04:31.943760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:09.751 [2024-11-20 19:04:31.943775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:09.751 qpair failed and we were unable to recover it.
00:27:09.751 [2024-11-20 19:04:31.953713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.751 [2024-11-20 19:04:31.953785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.751 [2024-11-20 19:04:31.953800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.751 [2024-11-20 19:04:31.953807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.751 [2024-11-20 19:04:31.953813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.751 [2024-11-20 19:04:31.953828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.751 qpair failed and we were unable to recover it. 
00:27:09.751 [2024-11-20 19:04:31.963792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.751 [2024-11-20 19:04:31.963849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.751 [2024-11-20 19:04:31.963863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.751 [2024-11-20 19:04:31.963870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.751 [2024-11-20 19:04:31.963876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.751 [2024-11-20 19:04:31.963891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.751 qpair failed and we were unable to recover it. 
00:27:09.751 [2024-11-20 19:04:31.973773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.751 [2024-11-20 19:04:31.973825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.751 [2024-11-20 19:04:31.973839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.751 [2024-11-20 19:04:31.973846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.751 [2024-11-20 19:04:31.973852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.751 [2024-11-20 19:04:31.973870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.751 qpair failed and we were unable to recover it. 
00:27:09.751 [2024-11-20 19:04:31.983810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.751 [2024-11-20 19:04:31.983884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.751 [2024-11-20 19:04:31.983898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.751 [2024-11-20 19:04:31.983905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.751 [2024-11-20 19:04:31.983911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.751 [2024-11-20 19:04:31.983927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.751 qpair failed and we were unable to recover it. 
00:27:09.751 [2024-11-20 19:04:31.993808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.752 [2024-11-20 19:04:31.993862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.752 [2024-11-20 19:04:31.993876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.752 [2024-11-20 19:04:31.993883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.752 [2024-11-20 19:04:31.993889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.752 [2024-11-20 19:04:31.993904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.752 qpair failed and we were unable to recover it. 
00:27:09.752 [2024-11-20 19:04:32.003856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.752 [2024-11-20 19:04:32.003909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.752 [2024-11-20 19:04:32.003923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.752 [2024-11-20 19:04:32.003930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.752 [2024-11-20 19:04:32.003936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.752 [2024-11-20 19:04:32.003952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.752 qpair failed and we were unable to recover it. 
00:27:09.752 [2024-11-20 19:04:32.013891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.752 [2024-11-20 19:04:32.013965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.752 [2024-11-20 19:04:32.013979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.752 [2024-11-20 19:04:32.013986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.752 [2024-11-20 19:04:32.013992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.752 [2024-11-20 19:04:32.014007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.752 qpair failed and we were unable to recover it. 
00:27:09.752 [2024-11-20 19:04:32.023893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.752 [2024-11-20 19:04:32.023959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.752 [2024-11-20 19:04:32.023973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.752 [2024-11-20 19:04:32.023980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.752 [2024-11-20 19:04:32.023986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.752 [2024-11-20 19:04:32.024001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.752 qpair failed and we were unable to recover it. 
00:27:09.752 [2024-11-20 19:04:32.033929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.752 [2024-11-20 19:04:32.033982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.752 [2024-11-20 19:04:32.033996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.752 [2024-11-20 19:04:32.034003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.752 [2024-11-20 19:04:32.034010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.752 [2024-11-20 19:04:32.034024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.752 qpair failed and we were unable to recover it. 
00:27:09.752 [2024-11-20 19:04:32.043972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.752 [2024-11-20 19:04:32.044036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.752 [2024-11-20 19:04:32.044050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.752 [2024-11-20 19:04:32.044058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.752 [2024-11-20 19:04:32.044064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.752 [2024-11-20 19:04:32.044079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.752 qpair failed and we were unable to recover it. 
00:27:09.752 [2024-11-20 19:04:32.053993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.752 [2024-11-20 19:04:32.054049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.752 [2024-11-20 19:04:32.054064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.752 [2024-11-20 19:04:32.054071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.752 [2024-11-20 19:04:32.054077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.752 [2024-11-20 19:04:32.054091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.752 qpair failed and we were unable to recover it. 
00:27:09.752 [2024-11-20 19:04:32.064016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.752 [2024-11-20 19:04:32.064075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.752 [2024-11-20 19:04:32.064089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.752 [2024-11-20 19:04:32.064099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.752 [2024-11-20 19:04:32.064105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.752 [2024-11-20 19:04:32.064119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.752 qpair failed and we were unable to recover it. 
00:27:09.752 [2024-11-20 19:04:32.074060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.752 [2024-11-20 19:04:32.074129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.752 [2024-11-20 19:04:32.074144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.752 [2024-11-20 19:04:32.074151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.752 [2024-11-20 19:04:32.074157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:09.752 [2024-11-20 19:04:32.074172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:09.752 qpair failed and we were unable to recover it. 
00:27:10.011 [2024-11-20 19:04:32.084132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.011 [2024-11-20 19:04:32.084226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.011 [2024-11-20 19:04:32.084241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.011 [2024-11-20 19:04:32.084248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.011 [2024-11-20 19:04:32.084254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.011 [2024-11-20 19:04:32.084268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.011 qpair failed and we were unable to recover it. 
00:27:10.011 [2024-11-20 19:04:32.094078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.011 [2024-11-20 19:04:32.094142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.011 [2024-11-20 19:04:32.094156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.011 [2024-11-20 19:04:32.094163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.011 [2024-11-20 19:04:32.094169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.011 [2024-11-20 19:04:32.094185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.011 qpair failed and we were unable to recover it. 
00:27:10.011 [2024-11-20 19:04:32.104141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.011 [2024-11-20 19:04:32.104199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.011 [2024-11-20 19:04:32.104217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.011 [2024-11-20 19:04:32.104224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.011 [2024-11-20 19:04:32.104231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.011 [2024-11-20 19:04:32.104248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.011 qpair failed and we were unable to recover it. 
00:27:10.011 [2024-11-20 19:04:32.114167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.012 [2024-11-20 19:04:32.114222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.012 [2024-11-20 19:04:32.114235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.012 [2024-11-20 19:04:32.114243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.012 [2024-11-20 19:04:32.114250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.012 [2024-11-20 19:04:32.114264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.012 qpair failed and we were unable to recover it. 
00:27:10.012 [2024-11-20 19:04:32.124230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.012 [2024-11-20 19:04:32.124332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.012 [2024-11-20 19:04:32.124348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.012 [2024-11-20 19:04:32.124355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.012 [2024-11-20 19:04:32.124361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.012 [2024-11-20 19:04:32.124376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.012 qpair failed and we were unable to recover it. 
00:27:10.012 [2024-11-20 19:04:32.134228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.012 [2024-11-20 19:04:32.134333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.012 [2024-11-20 19:04:32.134348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.012 [2024-11-20 19:04:32.134355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.012 [2024-11-20 19:04:32.134362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.012 [2024-11-20 19:04:32.134377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.012 qpair failed and we were unable to recover it. 
00:27:10.012 [2024-11-20 19:04:32.144250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.012 [2024-11-20 19:04:32.144306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.012 [2024-11-20 19:04:32.144319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.012 [2024-11-20 19:04:32.144326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.012 [2024-11-20 19:04:32.144333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.012 [2024-11-20 19:04:32.144347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.012 qpair failed and we were unable to recover it. 
00:27:10.012 [2024-11-20 19:04:32.154280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.012 [2024-11-20 19:04:32.154333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.012 [2024-11-20 19:04:32.154348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.012 [2024-11-20 19:04:32.154355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.012 [2024-11-20 19:04:32.154361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.012 [2024-11-20 19:04:32.154376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.012 qpair failed and we were unable to recover it. 
00:27:10.012 [2024-11-20 19:04:32.164325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.012 [2024-11-20 19:04:32.164383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.012 [2024-11-20 19:04:32.164396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.012 [2024-11-20 19:04:32.164403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.012 [2024-11-20 19:04:32.164410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.012 [2024-11-20 19:04:32.164424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.012 qpair failed and we were unable to recover it. 
00:27:10.012 [2024-11-20 19:04:32.174359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.012 [2024-11-20 19:04:32.174415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.012 [2024-11-20 19:04:32.174429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.012 [2024-11-20 19:04:32.174436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.012 [2024-11-20 19:04:32.174442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.012 [2024-11-20 19:04:32.174457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.012 qpair failed and we were unable to recover it. 
00:27:10.012 [2024-11-20 19:04:32.184372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.012 [2024-11-20 19:04:32.184431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.012 [2024-11-20 19:04:32.184445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.012 [2024-11-20 19:04:32.184452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.012 [2024-11-20 19:04:32.184459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.012 [2024-11-20 19:04:32.184473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.012 qpair failed and we were unable to recover it. 
00:27:10.012 [2024-11-20 19:04:32.194388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.012 [2024-11-20 19:04:32.194441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.012 [2024-11-20 19:04:32.194457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.012 [2024-11-20 19:04:32.194465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.012 [2024-11-20 19:04:32.194471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.012 [2024-11-20 19:04:32.194486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.012 qpair failed and we were unable to recover it. 
00:27:10.012 [2024-11-20 19:04:32.204430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.012 [2024-11-20 19:04:32.204496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.012 [2024-11-20 19:04:32.204511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.012 [2024-11-20 19:04:32.204518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.012 [2024-11-20 19:04:32.204524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.012 [2024-11-20 19:04:32.204538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.012 qpair failed and we were unable to recover it. 
00:27:10.012 [2024-11-20 19:04:32.214372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.012 [2024-11-20 19:04:32.214443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.012 [2024-11-20 19:04:32.214457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.012 [2024-11-20 19:04:32.214465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.012 [2024-11-20 19:04:32.214471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.012 [2024-11-20 19:04:32.214486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.012 qpair failed and we were unable to recover it. 
00:27:10.012 [2024-11-20 19:04:32.224476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.012 [2024-11-20 19:04:32.224526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.012 [2024-11-20 19:04:32.224540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.012 [2024-11-20 19:04:32.224547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.012 [2024-11-20 19:04:32.224553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.012 [2024-11-20 19:04:32.224568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.012 qpair failed and we were unable to recover it.
00:27:10.012 [2024-11-20 19:04:32.234473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.012 [2024-11-20 19:04:32.234541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.012 [2024-11-20 19:04:32.234554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.012 [2024-11-20 19:04:32.234561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.012 [2024-11-20 19:04:32.234573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.012 [2024-11-20 19:04:32.234588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.012 qpair failed and we were unable to recover it.
00:27:10.012 [2024-11-20 19:04:32.244457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.013 [2024-11-20 19:04:32.244536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.013 [2024-11-20 19:04:32.244550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.013 [2024-11-20 19:04:32.244558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.013 [2024-11-20 19:04:32.244564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.013 [2024-11-20 19:04:32.244579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.013 qpair failed and we were unable to recover it.
00:27:10.013 [2024-11-20 19:04:32.254552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.013 [2024-11-20 19:04:32.254609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.013 [2024-11-20 19:04:32.254623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.013 [2024-11-20 19:04:32.254631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.013 [2024-11-20 19:04:32.254637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.013 [2024-11-20 19:04:32.254652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.013 qpair failed and we were unable to recover it.
00:27:10.013 [2024-11-20 19:04:32.264571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.013 [2024-11-20 19:04:32.264620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.013 [2024-11-20 19:04:32.264633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.013 [2024-11-20 19:04:32.264640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.013 [2024-11-20 19:04:32.264647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.013 [2024-11-20 19:04:32.264662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.013 qpair failed and we were unable to recover it.
00:27:10.013 [2024-11-20 19:04:32.274532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.013 [2024-11-20 19:04:32.274587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.013 [2024-11-20 19:04:32.274600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.013 [2024-11-20 19:04:32.274608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.013 [2024-11-20 19:04:32.274614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.013 [2024-11-20 19:04:32.274628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.013 qpair failed and we were unable to recover it.
00:27:10.013 [2024-11-20 19:04:32.284640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.013 [2024-11-20 19:04:32.284695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.013 [2024-11-20 19:04:32.284709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.013 [2024-11-20 19:04:32.284716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.013 [2024-11-20 19:04:32.284723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.013 [2024-11-20 19:04:32.284737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.013 qpair failed and we were unable to recover it.
00:27:10.013 [2024-11-20 19:04:32.294650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.013 [2024-11-20 19:04:32.294707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.013 [2024-11-20 19:04:32.294720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.013 [2024-11-20 19:04:32.294728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.013 [2024-11-20 19:04:32.294734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.013 [2024-11-20 19:04:32.294748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.013 qpair failed and we were unable to recover it.
00:27:10.013 [2024-11-20 19:04:32.304719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.013 [2024-11-20 19:04:32.304782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.013 [2024-11-20 19:04:32.304796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.013 [2024-11-20 19:04:32.304803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.013 [2024-11-20 19:04:32.304809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.013 [2024-11-20 19:04:32.304824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.013 qpair failed and we were unable to recover it.
00:27:10.013 [2024-11-20 19:04:32.314646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.013 [2024-11-20 19:04:32.314702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.013 [2024-11-20 19:04:32.314716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.013 [2024-11-20 19:04:32.314722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.013 [2024-11-20 19:04:32.314728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.013 [2024-11-20 19:04:32.314743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.013 qpair failed and we were unable to recover it.
00:27:10.013 [2024-11-20 19:04:32.324757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.013 [2024-11-20 19:04:32.324812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.013 [2024-11-20 19:04:32.324829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.013 [2024-11-20 19:04:32.324837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.013 [2024-11-20 19:04:32.324844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.013 [2024-11-20 19:04:32.324859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.013 qpair failed and we were unable to recover it.
00:27:10.013 [2024-11-20 19:04:32.334767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.013 [2024-11-20 19:04:32.334820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.013 [2024-11-20 19:04:32.334834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.013 [2024-11-20 19:04:32.334842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.013 [2024-11-20 19:04:32.334848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.013 [2024-11-20 19:04:32.334863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.013 qpair failed and we were unable to recover it.
00:27:10.273 [2024-11-20 19:04:32.344837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.273 [2024-11-20 19:04:32.344895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.273 [2024-11-20 19:04:32.344909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.273 [2024-11-20 19:04:32.344916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.273 [2024-11-20 19:04:32.344923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.273 [2024-11-20 19:04:32.344937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.273 qpair failed and we were unable to recover it.
00:27:10.273 [2024-11-20 19:04:32.354841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.273 [2024-11-20 19:04:32.354893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.273 [2024-11-20 19:04:32.354906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.273 [2024-11-20 19:04:32.354913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.273 [2024-11-20 19:04:32.354920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.273 [2024-11-20 19:04:32.354934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.273 qpair failed and we were unable to recover it.
00:27:10.273 [2024-11-20 19:04:32.364858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.273 [2024-11-20 19:04:32.364912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.273 [2024-11-20 19:04:32.364926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.273 [2024-11-20 19:04:32.364933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.273 [2024-11-20 19:04:32.364942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.273 [2024-11-20 19:04:32.364958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.273 qpair failed and we were unable to recover it.
00:27:10.273 [2024-11-20 19:04:32.374900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.273 [2024-11-20 19:04:32.374989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.273 [2024-11-20 19:04:32.375003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.274 [2024-11-20 19:04:32.375010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.274 [2024-11-20 19:04:32.375016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.274 [2024-11-20 19:04:32.375031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.274 qpair failed and we were unable to recover it.
00:27:10.274 [2024-11-20 19:04:32.384895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.274 [2024-11-20 19:04:32.384951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.274 [2024-11-20 19:04:32.384965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.274 [2024-11-20 19:04:32.384973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.274 [2024-11-20 19:04:32.384979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.274 [2024-11-20 19:04:32.384994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.274 qpair failed and we were unable to recover it.
00:27:10.274 [2024-11-20 19:04:32.394954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.274 [2024-11-20 19:04:32.395009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.274 [2024-11-20 19:04:32.395023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.274 [2024-11-20 19:04:32.395030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.274 [2024-11-20 19:04:32.395037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.274 [2024-11-20 19:04:32.395051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.274 qpair failed and we were unable to recover it.
00:27:10.274 [2024-11-20 19:04:32.405015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.274 [2024-11-20 19:04:32.405116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.274 [2024-11-20 19:04:32.405131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.274 [2024-11-20 19:04:32.405138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.274 [2024-11-20 19:04:32.405144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.274 [2024-11-20 19:04:32.405159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.274 qpair failed and we were unable to recover it.
00:27:10.274 [2024-11-20 19:04:32.415009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.274 [2024-11-20 19:04:32.415066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.274 [2024-11-20 19:04:32.415080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.274 [2024-11-20 19:04:32.415088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.274 [2024-11-20 19:04:32.415094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.274 [2024-11-20 19:04:32.415108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.274 qpair failed and we were unable to recover it.
00:27:10.274 [2024-11-20 19:04:32.425055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.274 [2024-11-20 19:04:32.425137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.274 [2024-11-20 19:04:32.425151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.274 [2024-11-20 19:04:32.425158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.274 [2024-11-20 19:04:32.425164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.274 [2024-11-20 19:04:32.425179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.274 qpair failed and we were unable to recover it.
00:27:10.274 [2024-11-20 19:04:32.435050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.274 [2024-11-20 19:04:32.435104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.274 [2024-11-20 19:04:32.435118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.274 [2024-11-20 19:04:32.435125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.274 [2024-11-20 19:04:32.435132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.274 [2024-11-20 19:04:32.435146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.274 qpair failed and we were unable to recover it.
00:27:10.274 [2024-11-20 19:04:32.445087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.274 [2024-11-20 19:04:32.445144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.274 [2024-11-20 19:04:32.445159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.274 [2024-11-20 19:04:32.445166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.274 [2024-11-20 19:04:32.445172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.274 [2024-11-20 19:04:32.445186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.274 qpair failed and we were unable to recover it.
00:27:10.274 [2024-11-20 19:04:32.455135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.274 [2024-11-20 19:04:32.455195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.274 [2024-11-20 19:04:32.455213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.274 [2024-11-20 19:04:32.455220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.274 [2024-11-20 19:04:32.455227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.274 [2024-11-20 19:04:32.455242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.274 qpair failed and we were unable to recover it.
00:27:10.274 [2024-11-20 19:04:32.465127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.274 [2024-11-20 19:04:32.465188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.274 [2024-11-20 19:04:32.465207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.274 [2024-11-20 19:04:32.465215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.274 [2024-11-20 19:04:32.465221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.274 [2024-11-20 19:04:32.465237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.274 qpair failed and we were unable to recover it.
00:27:10.274 [2024-11-20 19:04:32.475174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.274 [2024-11-20 19:04:32.475239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.274 [2024-11-20 19:04:32.475253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.274 [2024-11-20 19:04:32.475261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.274 [2024-11-20 19:04:32.475267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.274 [2024-11-20 19:04:32.475281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.274 qpair failed and we were unable to recover it.
00:27:10.274 [2024-11-20 19:04:32.485209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.274 [2024-11-20 19:04:32.485266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.274 [2024-11-20 19:04:32.485280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.274 [2024-11-20 19:04:32.485287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.274 [2024-11-20 19:04:32.485293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.274 [2024-11-20 19:04:32.485308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.274 qpair failed and we were unable to recover it.
00:27:10.274 [2024-11-20 19:04:32.495186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.274 [2024-11-20 19:04:32.495248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.274 [2024-11-20 19:04:32.495261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.274 [2024-11-20 19:04:32.495272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.274 [2024-11-20 19:04:32.495278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.274 [2024-11-20 19:04:32.495292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.274 qpair failed and we were unable to recover it.
00:27:10.274 [2024-11-20 19:04:32.505284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.274 [2024-11-20 19:04:32.505355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.275 [2024-11-20 19:04:32.505370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.275 [2024-11-20 19:04:32.505377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.275 [2024-11-20 19:04:32.505384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.275 [2024-11-20 19:04:32.505398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.275 qpair failed and we were unable to recover it.
00:27:10.275 [2024-11-20 19:04:32.515283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.275 [2024-11-20 19:04:32.515336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.275 [2024-11-20 19:04:32.515349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.275 [2024-11-20 19:04:32.515356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.275 [2024-11-20 19:04:32.515363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.275 [2024-11-20 19:04:32.515378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.275 qpair failed and we were unable to recover it.
00:27:10.275 [2024-11-20 19:04:32.525313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.275 [2024-11-20 19:04:32.525371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.275 [2024-11-20 19:04:32.525385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.275 [2024-11-20 19:04:32.525392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.275 [2024-11-20 19:04:32.525398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.275 [2024-11-20 19:04:32.525412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.275 qpair failed and we were unable to recover it.
00:27:10.275 [2024-11-20 19:04:32.535436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.275 [2024-11-20 19:04:32.535507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.275 [2024-11-20 19:04:32.535521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.275 [2024-11-20 19:04:32.535528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.275 [2024-11-20 19:04:32.535535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.275 [2024-11-20 19:04:32.535552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.275 qpair failed and we were unable to recover it.
00:27:10.275 [2024-11-20 19:04:32.545424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.275 [2024-11-20 19:04:32.545476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.275 [2024-11-20 19:04:32.545490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.275 [2024-11-20 19:04:32.545497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.275 [2024-11-20 19:04:32.545503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.275 [2024-11-20 19:04:32.545518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.275 qpair failed and we were unable to recover it.
00:27:10.275 [2024-11-20 19:04:32.555439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.275 [2024-11-20 19:04:32.555493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.275 [2024-11-20 19:04:32.555507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.275 [2024-11-20 19:04:32.555514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.275 [2024-11-20 19:04:32.555521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.275 [2024-11-20 19:04:32.555535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.275 qpair failed and we were unable to recover it.
00:27:10.275 [2024-11-20 19:04:32.565486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.275 [2024-11-20 19:04:32.565543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.275 [2024-11-20 19:04:32.565557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.275 [2024-11-20 19:04:32.565564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.275 [2024-11-20 19:04:32.565571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.275 [2024-11-20 19:04:32.565585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.275 qpair failed and we were unable to recover it.
00:27:10.275 [2024-11-20 19:04:32.575467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.275 [2024-11-20 19:04:32.575525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.275 [2024-11-20 19:04:32.575539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.275 [2024-11-20 19:04:32.575546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.275 [2024-11-20 19:04:32.575553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.275 [2024-11-20 19:04:32.575568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.275 qpair failed and we were unable to recover it. 
00:27:10.275 [2024-11-20 19:04:32.585486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.275 [2024-11-20 19:04:32.585540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.275 [2024-11-20 19:04:32.585554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.275 [2024-11-20 19:04:32.585561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.275 [2024-11-20 19:04:32.585567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.275 [2024-11-20 19:04:32.585581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.275 qpair failed and we were unable to recover it. 
00:27:10.275 [2024-11-20 19:04:32.595526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.275 [2024-11-20 19:04:32.595581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.275 [2024-11-20 19:04:32.595594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.275 [2024-11-20 19:04:32.595601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.275 [2024-11-20 19:04:32.595608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.275 [2024-11-20 19:04:32.595622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.275 qpair failed and we were unable to recover it. 
00:27:10.536 [2024-11-20 19:04:32.605535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.536 [2024-11-20 19:04:32.605589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.536 [2024-11-20 19:04:32.605603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.536 [2024-11-20 19:04:32.605610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.536 [2024-11-20 19:04:32.605617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.536 [2024-11-20 19:04:32.605631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.536 qpair failed and we were unable to recover it. 
00:27:10.536 [2024-11-20 19:04:32.615577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.536 [2024-11-20 19:04:32.615638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.536 [2024-11-20 19:04:32.615651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.536 [2024-11-20 19:04:32.615659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.536 [2024-11-20 19:04:32.615666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.536 [2024-11-20 19:04:32.615680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.536 qpair failed and we were unable to recover it. 
00:27:10.536 [2024-11-20 19:04:32.625612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.536 [2024-11-20 19:04:32.625683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.536 [2024-11-20 19:04:32.625700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.536 [2024-11-20 19:04:32.625708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.536 [2024-11-20 19:04:32.625714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.536 [2024-11-20 19:04:32.625728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.536 qpair failed and we were unable to recover it. 
00:27:10.536 [2024-11-20 19:04:32.635575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.536 [2024-11-20 19:04:32.635635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.536 [2024-11-20 19:04:32.635649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.536 [2024-11-20 19:04:32.635656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.536 [2024-11-20 19:04:32.635662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.536 [2024-11-20 19:04:32.635677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.536 qpair failed and we were unable to recover it. 
00:27:10.536 [2024-11-20 19:04:32.645671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.536 [2024-11-20 19:04:32.645732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.536 [2024-11-20 19:04:32.645746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.536 [2024-11-20 19:04:32.645753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.536 [2024-11-20 19:04:32.645760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.536 [2024-11-20 19:04:32.645774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.536 qpair failed and we were unable to recover it. 
00:27:10.536 [2024-11-20 19:04:32.655696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.536 [2024-11-20 19:04:32.655780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.536 [2024-11-20 19:04:32.655794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.536 [2024-11-20 19:04:32.655801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.536 [2024-11-20 19:04:32.655807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.536 [2024-11-20 19:04:32.655821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.536 qpair failed and we were unable to recover it. 
00:27:10.536 [2024-11-20 19:04:32.665719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.536 [2024-11-20 19:04:32.665774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.536 [2024-11-20 19:04:32.665787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.536 [2024-11-20 19:04:32.665794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.536 [2024-11-20 19:04:32.665801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.536 [2024-11-20 19:04:32.665819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.536 qpair failed and we were unable to recover it. 
00:27:10.536 [2024-11-20 19:04:32.675746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.536 [2024-11-20 19:04:32.675800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.536 [2024-11-20 19:04:32.675814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.536 [2024-11-20 19:04:32.675821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.536 [2024-11-20 19:04:32.675827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.536 [2024-11-20 19:04:32.675842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.536 qpair failed and we were unable to recover it. 
00:27:10.536 [2024-11-20 19:04:32.685785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.536 [2024-11-20 19:04:32.685841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.536 [2024-11-20 19:04:32.685855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.536 [2024-11-20 19:04:32.685862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.536 [2024-11-20 19:04:32.685868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.536 [2024-11-20 19:04:32.685883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.536 qpair failed and we were unable to recover it. 
00:27:10.536 [2024-11-20 19:04:32.695800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.536 [2024-11-20 19:04:32.695857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.536 [2024-11-20 19:04:32.695871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.536 [2024-11-20 19:04:32.695879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.536 [2024-11-20 19:04:32.695886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.536 [2024-11-20 19:04:32.695900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.536 qpair failed and we were unable to recover it. 
00:27:10.536 [2024-11-20 19:04:32.705829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.536 [2024-11-20 19:04:32.705884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.536 [2024-11-20 19:04:32.705904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.536 [2024-11-20 19:04:32.705912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.536 [2024-11-20 19:04:32.705918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.536 [2024-11-20 19:04:32.705937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.536 qpair failed and we were unable to recover it. 
00:27:10.536 [2024-11-20 19:04:32.715907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.536 [2024-11-20 19:04:32.715962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.536 [2024-11-20 19:04:32.715976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.537 [2024-11-20 19:04:32.715984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.537 [2024-11-20 19:04:32.715990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.537 [2024-11-20 19:04:32.716005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.537 qpair failed and we were unable to recover it. 
00:27:10.537 [2024-11-20 19:04:32.725899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.537 [2024-11-20 19:04:32.725955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.537 [2024-11-20 19:04:32.725970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.537 [2024-11-20 19:04:32.725977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.537 [2024-11-20 19:04:32.725984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.537 [2024-11-20 19:04:32.725998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.537 qpair failed and we were unable to recover it. 
00:27:10.537 [2024-11-20 19:04:32.735920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.537 [2024-11-20 19:04:32.735976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.537 [2024-11-20 19:04:32.735990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.537 [2024-11-20 19:04:32.735998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.537 [2024-11-20 19:04:32.736004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.537 [2024-11-20 19:04:32.736019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.537 qpair failed and we were unable to recover it. 
00:27:10.537 [2024-11-20 19:04:32.745978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.537 [2024-11-20 19:04:32.746030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.537 [2024-11-20 19:04:32.746044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.537 [2024-11-20 19:04:32.746051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.537 [2024-11-20 19:04:32.746058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.537 [2024-11-20 19:04:32.746072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.537 qpair failed and we were unable to recover it. 
00:27:10.537 [2024-11-20 19:04:32.755896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.537 [2024-11-20 19:04:32.755956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.537 [2024-11-20 19:04:32.755974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.537 [2024-11-20 19:04:32.755981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.537 [2024-11-20 19:04:32.755987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.537 [2024-11-20 19:04:32.756002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.537 qpair failed and we were unable to recover it. 
00:27:10.537 [2024-11-20 19:04:32.766014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.537 [2024-11-20 19:04:32.766110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.537 [2024-11-20 19:04:32.766125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.537 [2024-11-20 19:04:32.766132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.537 [2024-11-20 19:04:32.766138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.537 [2024-11-20 19:04:32.766154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.537 qpair failed and we were unable to recover it. 
00:27:10.537 [2024-11-20 19:04:32.776028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.537 [2024-11-20 19:04:32.776085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.537 [2024-11-20 19:04:32.776099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.537 [2024-11-20 19:04:32.776106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.537 [2024-11-20 19:04:32.776112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.537 [2024-11-20 19:04:32.776127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.537 qpair failed and we were unable to recover it. 
00:27:10.537 [2024-11-20 19:04:32.786027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.537 [2024-11-20 19:04:32.786092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.537 [2024-11-20 19:04:32.786107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.537 [2024-11-20 19:04:32.786114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.537 [2024-11-20 19:04:32.786120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.537 [2024-11-20 19:04:32.786134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.537 qpair failed and we were unable to recover it. 
00:27:10.537 [2024-11-20 19:04:32.796080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.537 [2024-11-20 19:04:32.796170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.537 [2024-11-20 19:04:32.796185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.537 [2024-11-20 19:04:32.796192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.537 [2024-11-20 19:04:32.796206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.537 [2024-11-20 19:04:32.796221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.537 qpair failed and we were unable to recover it. 
00:27:10.537 [2024-11-20 19:04:32.806199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.537 [2024-11-20 19:04:32.806259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.537 [2024-11-20 19:04:32.806275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.537 [2024-11-20 19:04:32.806284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.537 [2024-11-20 19:04:32.806292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.537 [2024-11-20 19:04:32.806308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.537 qpair failed and we were unable to recover it. 
00:27:10.537 [2024-11-20 19:04:32.816146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.537 [2024-11-20 19:04:32.816204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.537 [2024-11-20 19:04:32.816218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.537 [2024-11-20 19:04:32.816226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.537 [2024-11-20 19:04:32.816233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.537 [2024-11-20 19:04:32.816248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.537 qpair failed and we were unable to recover it. 
00:27:10.537 [2024-11-20 19:04:32.826126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.537 [2024-11-20 19:04:32.826176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.537 [2024-11-20 19:04:32.826190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.537 [2024-11-20 19:04:32.826197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.537 [2024-11-20 19:04:32.826208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.537 [2024-11-20 19:04:32.826224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.537 qpair failed and we were unable to recover it. 
00:27:10.537 [2024-11-20 19:04:32.836195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.537 [2024-11-20 19:04:32.836260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.537 [2024-11-20 19:04:32.836274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.537 [2024-11-20 19:04:32.836281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.537 [2024-11-20 19:04:32.836287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:10.537 [2024-11-20 19:04:32.836302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:10.537 qpair failed and we were unable to recover it. 
00:27:10.537 [2024-11-20 19:04:32.846265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.537 [2024-11-20 19:04:32.846369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.537 [2024-11-20 19:04:32.846383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.538 [2024-11-20 19:04:32.846390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.538 [2024-11-20 19:04:32.846396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.538 [2024-11-20 19:04:32.846410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.538 qpair failed and we were unable to recover it.
00:27:10.538 [2024-11-20 19:04:32.856274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.538 [2024-11-20 19:04:32.856336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.538 [2024-11-20 19:04:32.856350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.538 [2024-11-20 19:04:32.856358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.538 [2024-11-20 19:04:32.856364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.538 [2024-11-20 19:04:32.856378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.538 qpair failed and we were unable to recover it.
00:27:10.797 [2024-11-20 19:04:32.866306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.797 [2024-11-20 19:04:32.866368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.797 [2024-11-20 19:04:32.866382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.797 [2024-11-20 19:04:32.866389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.797 [2024-11-20 19:04:32.866396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.798 [2024-11-20 19:04:32.866424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.798 qpair failed and we were unable to recover it.
00:27:10.798 [2024-11-20 19:04:32.876346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.798 [2024-11-20 19:04:32.876402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.798 [2024-11-20 19:04:32.876415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.798 [2024-11-20 19:04:32.876422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.798 [2024-11-20 19:04:32.876429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.798 [2024-11-20 19:04:32.876444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.798 qpair failed and we were unable to recover it.
00:27:10.798 [2024-11-20 19:04:32.886352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.798 [2024-11-20 19:04:32.886409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.798 [2024-11-20 19:04:32.886426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.798 [2024-11-20 19:04:32.886433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.798 [2024-11-20 19:04:32.886440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.798 [2024-11-20 19:04:32.886454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.798 qpair failed and we were unable to recover it.
00:27:10.798 [2024-11-20 19:04:32.896432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.798 [2024-11-20 19:04:32.896520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.798 [2024-11-20 19:04:32.896533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.798 [2024-11-20 19:04:32.896540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.798 [2024-11-20 19:04:32.896546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.798 [2024-11-20 19:04:32.896561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.798 qpair failed and we were unable to recover it.
00:27:10.798 [2024-11-20 19:04:32.906381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.798 [2024-11-20 19:04:32.906456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.798 [2024-11-20 19:04:32.906470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.798 [2024-11-20 19:04:32.906478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.798 [2024-11-20 19:04:32.906484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.798 [2024-11-20 19:04:32.906498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.798 qpair failed and we were unable to recover it.
00:27:10.798 [2024-11-20 19:04:32.916424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.798 [2024-11-20 19:04:32.916479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.798 [2024-11-20 19:04:32.916493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.798 [2024-11-20 19:04:32.916500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.798 [2024-11-20 19:04:32.916507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.798 [2024-11-20 19:04:32.916522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.798 qpair failed and we were unable to recover it.
00:27:10.798 [2024-11-20 19:04:32.926452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.798 [2024-11-20 19:04:32.926527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.798 [2024-11-20 19:04:32.926542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.798 [2024-11-20 19:04:32.926549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.798 [2024-11-20 19:04:32.926558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.798 [2024-11-20 19:04:32.926573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.798 qpair failed and we were unable to recover it.
00:27:10.798 [2024-11-20 19:04:32.936488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.798 [2024-11-20 19:04:32.936577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.798 [2024-11-20 19:04:32.936592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.798 [2024-11-20 19:04:32.936599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.798 [2024-11-20 19:04:32.936605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.798 [2024-11-20 19:04:32.936621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.798 qpair failed and we were unable to recover it.
00:27:10.798 [2024-11-20 19:04:32.946460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.798 [2024-11-20 19:04:32.946514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.798 [2024-11-20 19:04:32.946528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.798 [2024-11-20 19:04:32.946536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.798 [2024-11-20 19:04:32.946542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.798 [2024-11-20 19:04:32.946557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.798 qpair failed and we were unable to recover it.
00:27:10.798 [2024-11-20 19:04:32.956582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.798 [2024-11-20 19:04:32.956667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.798 [2024-11-20 19:04:32.956681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.798 [2024-11-20 19:04:32.956688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.798 [2024-11-20 19:04:32.956694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.798 [2024-11-20 19:04:32.956708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.798 qpair failed and we were unable to recover it.
00:27:10.798 [2024-11-20 19:04:32.966620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.798 [2024-11-20 19:04:32.966674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.798 [2024-11-20 19:04:32.966688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.798 [2024-11-20 19:04:32.966695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.798 [2024-11-20 19:04:32.966701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.798 [2024-11-20 19:04:32.966715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.798 qpair failed and we were unable to recover it.
00:27:10.798 [2024-11-20 19:04:32.976601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.798 [2024-11-20 19:04:32.976657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.798 [2024-11-20 19:04:32.976670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.798 [2024-11-20 19:04:32.976678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.798 [2024-11-20 19:04:32.976685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.798 [2024-11-20 19:04:32.976699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.798 qpair failed and we were unable to recover it.
00:27:10.798 [2024-11-20 19:04:32.986638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.798 [2024-11-20 19:04:32.986709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.798 [2024-11-20 19:04:32.986723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.798 [2024-11-20 19:04:32.986730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.798 [2024-11-20 19:04:32.986736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.798 [2024-11-20 19:04:32.986751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.798 qpair failed and we were unable to recover it.
00:27:10.798 [2024-11-20 19:04:32.996597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.798 [2024-11-20 19:04:32.996654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.798 [2024-11-20 19:04:32.996668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.798 [2024-11-20 19:04:32.996675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.799 [2024-11-20 19:04:32.996681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.799 [2024-11-20 19:04:32.996695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.799 qpair failed and we were unable to recover it.
00:27:10.799 [2024-11-20 19:04:33.006700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.799 [2024-11-20 19:04:33.006755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.799 [2024-11-20 19:04:33.006769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.799 [2024-11-20 19:04:33.006776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.799 [2024-11-20 19:04:33.006783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.799 [2024-11-20 19:04:33.006797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.799 qpair failed and we were unable to recover it.
00:27:10.799 [2024-11-20 19:04:33.016662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.799 [2024-11-20 19:04:33.016719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.799 [2024-11-20 19:04:33.016733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.799 [2024-11-20 19:04:33.016740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.799 [2024-11-20 19:04:33.016746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.799 [2024-11-20 19:04:33.016761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.799 qpair failed and we were unable to recover it.
00:27:10.799 [2024-11-20 19:04:33.026736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.799 [2024-11-20 19:04:33.026791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.799 [2024-11-20 19:04:33.026804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.799 [2024-11-20 19:04:33.026811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.799 [2024-11-20 19:04:33.026816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.799 [2024-11-20 19:04:33.026831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.799 qpair failed and we were unable to recover it.
00:27:10.799 [2024-11-20 19:04:33.036788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.799 [2024-11-20 19:04:33.036841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.799 [2024-11-20 19:04:33.036855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.799 [2024-11-20 19:04:33.036862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.799 [2024-11-20 19:04:33.036868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.799 [2024-11-20 19:04:33.036883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.799 qpair failed and we were unable to recover it.
00:27:10.799 [2024-11-20 19:04:33.046738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.799 [2024-11-20 19:04:33.046834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.799 [2024-11-20 19:04:33.046848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.799 [2024-11-20 19:04:33.046855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.799 [2024-11-20 19:04:33.046862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.799 [2024-11-20 19:04:33.046877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.799 qpair failed and we were unable to recover it.
00:27:10.799 [2024-11-20 19:04:33.056839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.799 [2024-11-20 19:04:33.056914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.799 [2024-11-20 19:04:33.056928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.799 [2024-11-20 19:04:33.056938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.799 [2024-11-20 19:04:33.056944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.799 [2024-11-20 19:04:33.056959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.799 qpair failed and we were unable to recover it.
00:27:10.799 [2024-11-20 19:04:33.066815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.799 [2024-11-20 19:04:33.066871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.799 [2024-11-20 19:04:33.066884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.799 [2024-11-20 19:04:33.066891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.799 [2024-11-20 19:04:33.066898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.799 [2024-11-20 19:04:33.066912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.799 qpair failed and we were unable to recover it.
00:27:10.799 [2024-11-20 19:04:33.076870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.799 [2024-11-20 19:04:33.076969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.799 [2024-11-20 19:04:33.076983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.799 [2024-11-20 19:04:33.076991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.799 [2024-11-20 19:04:33.076998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.799 [2024-11-20 19:04:33.077013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.799 qpair failed and we were unable to recover it.
00:27:10.799 [2024-11-20 19:04:33.086956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.799 [2024-11-20 19:04:33.087011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.799 [2024-11-20 19:04:33.087026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.799 [2024-11-20 19:04:33.087033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.799 [2024-11-20 19:04:33.087040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.799 [2024-11-20 19:04:33.087055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.799 qpair failed and we were unable to recover it.
00:27:10.799 [2024-11-20 19:04:33.096931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.799 [2024-11-20 19:04:33.096986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.799 [2024-11-20 19:04:33.097000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.799 [2024-11-20 19:04:33.097007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.799 [2024-11-20 19:04:33.097014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.799 [2024-11-20 19:04:33.097031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.799 qpair failed and we were unable to recover it.
00:27:10.799 [2024-11-20 19:04:33.106981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.799 [2024-11-20 19:04:33.107037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.799 [2024-11-20 19:04:33.107051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.799 [2024-11-20 19:04:33.107059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.799 [2024-11-20 19:04:33.107065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.799 [2024-11-20 19:04:33.107080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.799 qpair failed and we were unable to recover it.
00:27:10.799 [2024-11-20 19:04:33.117035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.799 [2024-11-20 19:04:33.117144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.799 [2024-11-20 19:04:33.117158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.799 [2024-11-20 19:04:33.117165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.799 [2024-11-20 19:04:33.117171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:10.799 [2024-11-20 19:04:33.117185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:10.799 qpair failed and we were unable to recover it.
00:27:11.059 [2024-11-20 19:04:33.127025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.059 [2024-11-20 19:04:33.127096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.059 [2024-11-20 19:04:33.127111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.059 [2024-11-20 19:04:33.127119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.059 [2024-11-20 19:04:33.127125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.059 [2024-11-20 19:04:33.127139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.059 qpair failed and we were unable to recover it.
00:27:11.059 [2024-11-20 19:04:33.137066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.059 [2024-11-20 19:04:33.137122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.059 [2024-11-20 19:04:33.137136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.059 [2024-11-20 19:04:33.137142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.059 [2024-11-20 19:04:33.137149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.059 [2024-11-20 19:04:33.137164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.060 qpair failed and we were unable to recover it.
00:27:11.060 [2024-11-20 19:04:33.147088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.060 [2024-11-20 19:04:33.147143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.060 [2024-11-20 19:04:33.147157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.060 [2024-11-20 19:04:33.147164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.060 [2024-11-20 19:04:33.147170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.060 [2024-11-20 19:04:33.147185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.060 qpair failed and we were unable to recover it.
00:27:11.060 [2024-11-20 19:04:33.157103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.060 [2024-11-20 19:04:33.157156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.060 [2024-11-20 19:04:33.157169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.060 [2024-11-20 19:04:33.157177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.060 [2024-11-20 19:04:33.157183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.060 [2024-11-20 19:04:33.157197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.060 qpair failed and we were unable to recover it.
00:27:11.060 [2024-11-20 19:04:33.167136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.060 [2024-11-20 19:04:33.167222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.060 [2024-11-20 19:04:33.167236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.060 [2024-11-20 19:04:33.167243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.060 [2024-11-20 19:04:33.167250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.060 [2024-11-20 19:04:33.167264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.060 qpair failed and we were unable to recover it.
00:27:11.060 [2024-11-20 19:04:33.177124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.060 [2024-11-20 19:04:33.177178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.060 [2024-11-20 19:04:33.177191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.060 [2024-11-20 19:04:33.177198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.060 [2024-11-20 19:04:33.177208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.060 [2024-11-20 19:04:33.177223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.060 qpair failed and we were unable to recover it.
00:27:11.060 [2024-11-20 19:04:33.187195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.060 [2024-11-20 19:04:33.187254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.060 [2024-11-20 19:04:33.187270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.060 [2024-11-20 19:04:33.187277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.060 [2024-11-20 19:04:33.187283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.060 [2024-11-20 19:04:33.187297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.060 qpair failed and we were unable to recover it.
00:27:11.060 [2024-11-20 19:04:33.197248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.060 [2024-11-20 19:04:33.197297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.060 [2024-11-20 19:04:33.197311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.060 [2024-11-20 19:04:33.197318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.060 [2024-11-20 19:04:33.197324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.060 [2024-11-20 19:04:33.197339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.060 qpair failed and we were unable to recover it. 
00:27:11.060 [2024-11-20 19:04:33.207290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.060 [2024-11-20 19:04:33.207367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.060 [2024-11-20 19:04:33.207383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.060 [2024-11-20 19:04:33.207392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.060 [2024-11-20 19:04:33.207400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.060 [2024-11-20 19:04:33.207418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.060 qpair failed and we were unable to recover it. 
00:27:11.060 [2024-11-20 19:04:33.217284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.060 [2024-11-20 19:04:33.217338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.060 [2024-11-20 19:04:33.217352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.060 [2024-11-20 19:04:33.217359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.060 [2024-11-20 19:04:33.217365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.060 [2024-11-20 19:04:33.217380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.060 qpair failed and we were unable to recover it. 
00:27:11.060 [2024-11-20 19:04:33.227358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.060 [2024-11-20 19:04:33.227411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.060 [2024-11-20 19:04:33.227425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.060 [2024-11-20 19:04:33.227432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.060 [2024-11-20 19:04:33.227438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.060 [2024-11-20 19:04:33.227455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.060 qpair failed and we were unable to recover it. 
00:27:11.060 [2024-11-20 19:04:33.237347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.060 [2024-11-20 19:04:33.237401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.060 [2024-11-20 19:04:33.237415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.060 [2024-11-20 19:04:33.237422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.060 [2024-11-20 19:04:33.237428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.060 [2024-11-20 19:04:33.237442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.060 qpair failed and we were unable to recover it. 
00:27:11.060 [2024-11-20 19:04:33.247397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.060 [2024-11-20 19:04:33.247470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.060 [2024-11-20 19:04:33.247484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.060 [2024-11-20 19:04:33.247491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.060 [2024-11-20 19:04:33.247497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.060 [2024-11-20 19:04:33.247511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.060 qpair failed and we were unable to recover it. 
00:27:11.060 [2024-11-20 19:04:33.257346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.060 [2024-11-20 19:04:33.257406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.060 [2024-11-20 19:04:33.257420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.060 [2024-11-20 19:04:33.257428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.060 [2024-11-20 19:04:33.257434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.060 [2024-11-20 19:04:33.257449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.060 qpair failed and we were unable to recover it. 
00:27:11.060 [2024-11-20 19:04:33.267436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.060 [2024-11-20 19:04:33.267491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.060 [2024-11-20 19:04:33.267504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.060 [2024-11-20 19:04:33.267511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.060 [2024-11-20 19:04:33.267517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.061 [2024-11-20 19:04:33.267532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.061 qpair failed and we were unable to recover it. 
00:27:11.061 [2024-11-20 19:04:33.277452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.061 [2024-11-20 19:04:33.277539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.061 [2024-11-20 19:04:33.277553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.061 [2024-11-20 19:04:33.277560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.061 [2024-11-20 19:04:33.277566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.061 [2024-11-20 19:04:33.277580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.061 qpair failed and we were unable to recover it. 
00:27:11.061 [2024-11-20 19:04:33.287499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.061 [2024-11-20 19:04:33.287560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.061 [2024-11-20 19:04:33.287575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.061 [2024-11-20 19:04:33.287583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.061 [2024-11-20 19:04:33.287589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.061 [2024-11-20 19:04:33.287603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.061 qpair failed and we were unable to recover it. 
00:27:11.061 [2024-11-20 19:04:33.297511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.061 [2024-11-20 19:04:33.297567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.061 [2024-11-20 19:04:33.297581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.061 [2024-11-20 19:04:33.297588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.061 [2024-11-20 19:04:33.297595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.061 [2024-11-20 19:04:33.297609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.061 qpair failed and we were unable to recover it. 
00:27:11.061 [2024-11-20 19:04:33.307532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.061 [2024-11-20 19:04:33.307588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.061 [2024-11-20 19:04:33.307602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.061 [2024-11-20 19:04:33.307609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.061 [2024-11-20 19:04:33.307616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.061 [2024-11-20 19:04:33.307630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.061 qpair failed and we were unable to recover it. 
00:27:11.061 [2024-11-20 19:04:33.317559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.061 [2024-11-20 19:04:33.317614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.061 [2024-11-20 19:04:33.317630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.061 [2024-11-20 19:04:33.317637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.061 [2024-11-20 19:04:33.317643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.061 [2024-11-20 19:04:33.317658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.061 qpair failed and we were unable to recover it. 
00:27:11.061 [2024-11-20 19:04:33.327589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.061 [2024-11-20 19:04:33.327655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.061 [2024-11-20 19:04:33.327668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.061 [2024-11-20 19:04:33.327675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.061 [2024-11-20 19:04:33.327682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.061 [2024-11-20 19:04:33.327696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.061 qpair failed and we were unable to recover it. 
00:27:11.061 [2024-11-20 19:04:33.337622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.061 [2024-11-20 19:04:33.337678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.061 [2024-11-20 19:04:33.337692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.061 [2024-11-20 19:04:33.337698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.061 [2024-11-20 19:04:33.337705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.061 [2024-11-20 19:04:33.337720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.061 qpair failed and we were unable to recover it. 
00:27:11.061 [2024-11-20 19:04:33.347624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.061 [2024-11-20 19:04:33.347673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.061 [2024-11-20 19:04:33.347686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.061 [2024-11-20 19:04:33.347693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.061 [2024-11-20 19:04:33.347699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.061 [2024-11-20 19:04:33.347714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.061 qpair failed and we were unable to recover it. 
00:27:11.061 [2024-11-20 19:04:33.357699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.061 [2024-11-20 19:04:33.357762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.061 [2024-11-20 19:04:33.357776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.061 [2024-11-20 19:04:33.357784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.061 [2024-11-20 19:04:33.357793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.061 [2024-11-20 19:04:33.357807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.061 qpair failed and we were unable to recover it. 
00:27:11.061 [2024-11-20 19:04:33.367719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.061 [2024-11-20 19:04:33.367781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.061 [2024-11-20 19:04:33.367795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.061 [2024-11-20 19:04:33.367802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.061 [2024-11-20 19:04:33.367809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.061 [2024-11-20 19:04:33.367823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.061 qpair failed and we were unable to recover it. 
00:27:11.061 [2024-11-20 19:04:33.377739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.061 [2024-11-20 19:04:33.377794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.061 [2024-11-20 19:04:33.377807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.061 [2024-11-20 19:04:33.377814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.061 [2024-11-20 19:04:33.377820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.061 [2024-11-20 19:04:33.377835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.061 qpair failed and we were unable to recover it. 
00:27:11.321 [2024-11-20 19:04:33.387776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.321 [2024-11-20 19:04:33.387834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.321 [2024-11-20 19:04:33.387847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.321 [2024-11-20 19:04:33.387855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.321 [2024-11-20 19:04:33.387861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.321 [2024-11-20 19:04:33.387875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.321 qpair failed and we were unable to recover it. 
00:27:11.321 [2024-11-20 19:04:33.397817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.321 [2024-11-20 19:04:33.397871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.321 [2024-11-20 19:04:33.397885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.321 [2024-11-20 19:04:33.397891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.321 [2024-11-20 19:04:33.397898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.321 [2024-11-20 19:04:33.397912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.321 qpair failed and we were unable to recover it. 
00:27:11.321 [2024-11-20 19:04:33.407830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.321 [2024-11-20 19:04:33.407885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.322 [2024-11-20 19:04:33.407898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.322 [2024-11-20 19:04:33.407906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.322 [2024-11-20 19:04:33.407912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.322 [2024-11-20 19:04:33.407927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.322 qpair failed and we were unable to recover it. 
00:27:11.322 [2024-11-20 19:04:33.417863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.322 [2024-11-20 19:04:33.417921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.322 [2024-11-20 19:04:33.417934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.322 [2024-11-20 19:04:33.417941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.322 [2024-11-20 19:04:33.417947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.322 [2024-11-20 19:04:33.417961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.322 qpair failed and we were unable to recover it. 
00:27:11.322 [2024-11-20 19:04:33.427931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.322 [2024-11-20 19:04:33.428003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.322 [2024-11-20 19:04:33.428017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.322 [2024-11-20 19:04:33.428024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.322 [2024-11-20 19:04:33.428030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.322 [2024-11-20 19:04:33.428045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.322 qpair failed and we were unable to recover it. 
00:27:11.322 [2024-11-20 19:04:33.437919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.322 [2024-11-20 19:04:33.437971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.322 [2024-11-20 19:04:33.437984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.322 [2024-11-20 19:04:33.437991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.322 [2024-11-20 19:04:33.437998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.322 [2024-11-20 19:04:33.438012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.322 qpair failed and we were unable to recover it. 
00:27:11.322 [2024-11-20 19:04:33.447943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.322 [2024-11-20 19:04:33.448007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.322 [2024-11-20 19:04:33.448023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.322 [2024-11-20 19:04:33.448031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.322 [2024-11-20 19:04:33.448038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.322 [2024-11-20 19:04:33.448052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.322 qpair failed and we were unable to recover it. 
00:27:11.322 [2024-11-20 19:04:33.457986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.322 [2024-11-20 19:04:33.458046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.322 [2024-11-20 19:04:33.458060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.322 [2024-11-20 19:04:33.458066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.322 [2024-11-20 19:04:33.458072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.322 [2024-11-20 19:04:33.458087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.322 qpair failed and we were unable to recover it. 
00:27:11.322 [2024-11-20 19:04:33.468003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.322 [2024-11-20 19:04:33.468055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.322 [2024-11-20 19:04:33.468068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.322 [2024-11-20 19:04:33.468075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.322 [2024-11-20 19:04:33.468082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.322 [2024-11-20 19:04:33.468096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.322 qpair failed and we were unable to recover it. 
00:27:11.322 [2024-11-20 19:04:33.478042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.322 [2024-11-20 19:04:33.478091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.322 [2024-11-20 19:04:33.478105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.322 [2024-11-20 19:04:33.478112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.322 [2024-11-20 19:04:33.478119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.322 [2024-11-20 19:04:33.478134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.322 qpair failed and we were unable to recover it. 
00:27:11.322 [2024-11-20 19:04:33.488062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.322 [2024-11-20 19:04:33.488155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.322 [2024-11-20 19:04:33.488169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.322 [2024-11-20 19:04:33.488179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.322 [2024-11-20 19:04:33.488185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.322 [2024-11-20 19:04:33.488200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.322 qpair failed and we were unable to recover it. 
00:27:11.322 [2024-11-20 19:04:33.498104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.322 [2024-11-20 19:04:33.498184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.322 [2024-11-20 19:04:33.498198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.322 [2024-11-20 19:04:33.498209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.322 [2024-11-20 19:04:33.498215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.322 [2024-11-20 19:04:33.498230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.322 qpair failed and we were unable to recover it. 
00:27:11.322 [2024-11-20 19:04:33.508127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.322 [2024-11-20 19:04:33.508231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.322 [2024-11-20 19:04:33.508245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.322 [2024-11-20 19:04:33.508252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.322 [2024-11-20 19:04:33.508258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.322 [2024-11-20 19:04:33.508272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.322 qpair failed and we were unable to recover it. 
00:27:11.322 [2024-11-20 19:04:33.518150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.322 [2024-11-20 19:04:33.518212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.322 [2024-11-20 19:04:33.518226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.322 [2024-11-20 19:04:33.518234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.322 [2024-11-20 19:04:33.518240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.322 [2024-11-20 19:04:33.518254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.322 qpair failed and we were unable to recover it. 
00:27:11.322 [2024-11-20 19:04:33.528224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.322 [2024-11-20 19:04:33.528284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.322 [2024-11-20 19:04:33.528297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.322 [2024-11-20 19:04:33.528304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.322 [2024-11-20 19:04:33.528311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.322 [2024-11-20 19:04:33.528325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.322 qpair failed and we were unable to recover it. 
00:27:11.322 [2024-11-20 19:04:33.538212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.322 [2024-11-20 19:04:33.538285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.323 [2024-11-20 19:04:33.538299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.323 [2024-11-20 19:04:33.538307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.323 [2024-11-20 19:04:33.538313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.323 [2024-11-20 19:04:33.538327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.323 qpair failed and we were unable to recover it. 
00:27:11.323 [2024-11-20 19:04:33.548214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.323 [2024-11-20 19:04:33.548280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.323 [2024-11-20 19:04:33.548294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.323 [2024-11-20 19:04:33.548302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.323 [2024-11-20 19:04:33.548308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.323 [2024-11-20 19:04:33.548322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.323 qpair failed and we were unable to recover it. 
00:27:11.323 [2024-11-20 19:04:33.558269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.323 [2024-11-20 19:04:33.558324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.323 [2024-11-20 19:04:33.558338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.323 [2024-11-20 19:04:33.558345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.323 [2024-11-20 19:04:33.558351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.323 [2024-11-20 19:04:33.558365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.323 qpair failed and we were unable to recover it. 
00:27:11.323 [2024-11-20 19:04:33.568315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.323 [2024-11-20 19:04:33.568368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.323 [2024-11-20 19:04:33.568382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.323 [2024-11-20 19:04:33.568388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.323 [2024-11-20 19:04:33.568395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.323 [2024-11-20 19:04:33.568410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.323 qpair failed and we were unable to recover it. 
00:27:11.323 [2024-11-20 19:04:33.578348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.323 [2024-11-20 19:04:33.578410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.323 [2024-11-20 19:04:33.578423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.323 [2024-11-20 19:04:33.578430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.323 [2024-11-20 19:04:33.578436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.323 [2024-11-20 19:04:33.578451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.323 qpair failed and we were unable to recover it. 
00:27:11.323 [2024-11-20 19:04:33.588331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.323 [2024-11-20 19:04:33.588386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.323 [2024-11-20 19:04:33.588399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.323 [2024-11-20 19:04:33.588406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.323 [2024-11-20 19:04:33.588413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.323 [2024-11-20 19:04:33.588427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.323 qpair failed and we were unable to recover it. 
00:27:11.323 [2024-11-20 19:04:33.598415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.323 [2024-11-20 19:04:33.598515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.323 [2024-11-20 19:04:33.598529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.323 [2024-11-20 19:04:33.598536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.323 [2024-11-20 19:04:33.598542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.323 [2024-11-20 19:04:33.598556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.323 qpair failed and we were unable to recover it. 
00:27:11.323 [2024-11-20 19:04:33.608419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.323 [2024-11-20 19:04:33.608475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.323 [2024-11-20 19:04:33.608489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.323 [2024-11-20 19:04:33.608496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.323 [2024-11-20 19:04:33.608502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.323 [2024-11-20 19:04:33.608516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.323 qpair failed and we were unable to recover it. 
00:27:11.323 [2024-11-20 19:04:33.618468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.323 [2024-11-20 19:04:33.618547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.323 [2024-11-20 19:04:33.618561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.323 [2024-11-20 19:04:33.618572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.323 [2024-11-20 19:04:33.618578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.323 [2024-11-20 19:04:33.618592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.323 qpair failed and we were unable to recover it. 
00:27:11.323 [2024-11-20 19:04:33.628479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.323 [2024-11-20 19:04:33.628531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.323 [2024-11-20 19:04:33.628545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.323 [2024-11-20 19:04:33.628551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.323 [2024-11-20 19:04:33.628558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.323 [2024-11-20 19:04:33.628572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.323 qpair failed and we were unable to recover it. 
00:27:11.323 [2024-11-20 19:04:33.638502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.323 [2024-11-20 19:04:33.638551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.323 [2024-11-20 19:04:33.638564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.323 [2024-11-20 19:04:33.638571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.323 [2024-11-20 19:04:33.638577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.323 [2024-11-20 19:04:33.638593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.323 qpair failed and we were unable to recover it. 
00:27:11.583 [2024-11-20 19:04:33.648562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.583 [2024-11-20 19:04:33.648663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.583 [2024-11-20 19:04:33.648677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.583 [2024-11-20 19:04:33.648684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.583 [2024-11-20 19:04:33.648691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.583 [2024-11-20 19:04:33.648705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.583 qpair failed and we were unable to recover it. 
00:27:11.583 [2024-11-20 19:04:33.658564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.583 [2024-11-20 19:04:33.658616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.583 [2024-11-20 19:04:33.658630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.583 [2024-11-20 19:04:33.658637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.584 [2024-11-20 19:04:33.658643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.584 [2024-11-20 19:04:33.658660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.584 qpair failed and we were unable to recover it. 
00:27:11.584 [2024-11-20 19:04:33.668640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.584 [2024-11-20 19:04:33.668745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.584 [2024-11-20 19:04:33.668759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.584 [2024-11-20 19:04:33.668766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.584 [2024-11-20 19:04:33.668773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.584 [2024-11-20 19:04:33.668787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.584 qpair failed and we were unable to recover it. 
00:27:11.584 [2024-11-20 19:04:33.678628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.584 [2024-11-20 19:04:33.678684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.584 [2024-11-20 19:04:33.678699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.584 [2024-11-20 19:04:33.678706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.584 [2024-11-20 19:04:33.678713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.584 [2024-11-20 19:04:33.678728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.584 qpair failed and we were unable to recover it. 
00:27:11.584 [2024-11-20 19:04:33.688648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.584 [2024-11-20 19:04:33.688717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.584 [2024-11-20 19:04:33.688730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.584 [2024-11-20 19:04:33.688738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.584 [2024-11-20 19:04:33.688744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.584 [2024-11-20 19:04:33.688759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.584 qpair failed and we were unable to recover it. 
00:27:11.584 [2024-11-20 19:04:33.698631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.584 [2024-11-20 19:04:33.698688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.584 [2024-11-20 19:04:33.698704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.584 [2024-11-20 19:04:33.698711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.584 [2024-11-20 19:04:33.698717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.584 [2024-11-20 19:04:33.698732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.584 qpair failed and we were unable to recover it. 
00:27:11.584 [2024-11-20 19:04:33.708705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.584 [2024-11-20 19:04:33.708765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.584 [2024-11-20 19:04:33.708778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.584 [2024-11-20 19:04:33.708787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.584 [2024-11-20 19:04:33.708794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.584 [2024-11-20 19:04:33.708808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.584 qpair failed and we were unable to recover it. 
00:27:11.584 [2024-11-20 19:04:33.718753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.584 [2024-11-20 19:04:33.718811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.584 [2024-11-20 19:04:33.718825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.584 [2024-11-20 19:04:33.718832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.584 [2024-11-20 19:04:33.718838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.584 [2024-11-20 19:04:33.718852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.584 qpair failed and we were unable to recover it. 
00:27:11.584 [2024-11-20 19:04:33.728771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.584 [2024-11-20 19:04:33.728870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.584 [2024-11-20 19:04:33.728884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.584 [2024-11-20 19:04:33.728891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.584 [2024-11-20 19:04:33.728896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.584 [2024-11-20 19:04:33.728911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.584 qpair failed and we were unable to recover it. 
00:27:11.584 [2024-11-20 19:04:33.738787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.584 [2024-11-20 19:04:33.738845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.584 [2024-11-20 19:04:33.738858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.584 [2024-11-20 19:04:33.738865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.584 [2024-11-20 19:04:33.738872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.584 [2024-11-20 19:04:33.738887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.584 qpair failed and we were unable to recover it. 
00:27:11.584 [2024-11-20 19:04:33.748843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.584 [2024-11-20 19:04:33.748902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.584 [2024-11-20 19:04:33.748920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.584 [2024-11-20 19:04:33.748928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.584 [2024-11-20 19:04:33.748934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.584 [2024-11-20 19:04:33.748948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.584 qpair failed and we were unable to recover it. 
00:27:11.584 [2024-11-20 19:04:33.758892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.584 [2024-11-20 19:04:33.758993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.584 [2024-11-20 19:04:33.759006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.584 [2024-11-20 19:04:33.759013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.584 [2024-11-20 19:04:33.759020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.584 [2024-11-20 19:04:33.759034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.584 qpair failed and we were unable to recover it. 
00:27:11.584 [2024-11-20 19:04:33.768897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.584 [2024-11-20 19:04:33.769003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.584 [2024-11-20 19:04:33.769017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.584 [2024-11-20 19:04:33.769024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.584 [2024-11-20 19:04:33.769031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.584 [2024-11-20 19:04:33.769046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.584 qpair failed and we were unable to recover it. 
00:27:11.584 [2024-11-20 19:04:33.778914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.584 [2024-11-20 19:04:33.778973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.585 [2024-11-20 19:04:33.778987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.585 [2024-11-20 19:04:33.778995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.585 [2024-11-20 19:04:33.779001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.585 [2024-11-20 19:04:33.779014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.585 qpair failed and we were unable to recover it.
00:27:11.585 [2024-11-20 19:04:33.788926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.585 [2024-11-20 19:04:33.788997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.585 [2024-11-20 19:04:33.789012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.585 [2024-11-20 19:04:33.789019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.585 [2024-11-20 19:04:33.789031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.585 [2024-11-20 19:04:33.789047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.585 qpair failed and we were unable to recover it.
00:27:11.585 [2024-11-20 19:04:33.798949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.585 [2024-11-20 19:04:33.799005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.585 [2024-11-20 19:04:33.799018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.585 [2024-11-20 19:04:33.799025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.585 [2024-11-20 19:04:33.799032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.585 [2024-11-20 19:04:33.799048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.585 qpair failed and we were unable to recover it.
00:27:11.585 [2024-11-20 19:04:33.808990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.585 [2024-11-20 19:04:33.809047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.585 [2024-11-20 19:04:33.809060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.585 [2024-11-20 19:04:33.809067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.585 [2024-11-20 19:04:33.809073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.585 [2024-11-20 19:04:33.809089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.585 qpair failed and we were unable to recover it.
00:27:11.585 [2024-11-20 19:04:33.819004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.585 [2024-11-20 19:04:33.819078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.585 [2024-11-20 19:04:33.819092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.585 [2024-11-20 19:04:33.819099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.585 [2024-11-20 19:04:33.819105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.585 [2024-11-20 19:04:33.819120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.585 qpair failed and we were unable to recover it.
00:27:11.585 [2024-11-20 19:04:33.829045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.585 [2024-11-20 19:04:33.829101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.585 [2024-11-20 19:04:33.829115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.585 [2024-11-20 19:04:33.829121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.585 [2024-11-20 19:04:33.829128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.585 [2024-11-20 19:04:33.829143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.585 qpair failed and we were unable to recover it.
00:27:11.585 [2024-11-20 19:04:33.839118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.585 [2024-11-20 19:04:33.839220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.585 [2024-11-20 19:04:33.839234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.585 [2024-11-20 19:04:33.839241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.585 [2024-11-20 19:04:33.839247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.585 [2024-11-20 19:04:33.839262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.585 qpair failed and we were unable to recover it.
00:27:11.585 [2024-11-20 19:04:33.849065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.585 [2024-11-20 19:04:33.849157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.585 [2024-11-20 19:04:33.849171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.585 [2024-11-20 19:04:33.849178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.585 [2024-11-20 19:04:33.849184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.585 [2024-11-20 19:04:33.849198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.585 qpair failed and we were unable to recover it.
00:27:11.585 [2024-11-20 19:04:33.859122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.585 [2024-11-20 19:04:33.859189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.585 [2024-11-20 19:04:33.859205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.585 [2024-11-20 19:04:33.859212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.585 [2024-11-20 19:04:33.859219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.585 [2024-11-20 19:04:33.859234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.585 qpair failed and we were unable to recover it.
00:27:11.585 [2024-11-20 19:04:33.869075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.585 [2024-11-20 19:04:33.869133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.585 [2024-11-20 19:04:33.869147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.585 [2024-11-20 19:04:33.869154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.585 [2024-11-20 19:04:33.869161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.585 [2024-11-20 19:04:33.869175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.585 qpair failed and we were unable to recover it.
00:27:11.585 [2024-11-20 19:04:33.879212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.585 [2024-11-20 19:04:33.879264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.585 [2024-11-20 19:04:33.879280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.585 [2024-11-20 19:04:33.879287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.585 [2024-11-20 19:04:33.879294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.585 [2024-11-20 19:04:33.879309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.585 qpair failed and we were unable to recover it.
00:27:11.585 [2024-11-20 19:04:33.889245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.585 [2024-11-20 19:04:33.889341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.585 [2024-11-20 19:04:33.889354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.585 [2024-11-20 19:04:33.889361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.585 [2024-11-20 19:04:33.889367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.585 [2024-11-20 19:04:33.889383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.585 qpair failed and we were unable to recover it.
00:27:11.585 [2024-11-20 19:04:33.899242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.585 [2024-11-20 19:04:33.899300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.585 [2024-11-20 19:04:33.899313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.585 [2024-11-20 19:04:33.899321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.585 [2024-11-20 19:04:33.899327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.585 [2024-11-20 19:04:33.899342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.585 qpair failed and we were unable to recover it.
00:27:11.845 [2024-11-20 19:04:33.909196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.845 [2024-11-20 19:04:33.909251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.846 [2024-11-20 19:04:33.909265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.846 [2024-11-20 19:04:33.909272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.846 [2024-11-20 19:04:33.909278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.846 [2024-11-20 19:04:33.909293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.846 qpair failed and we were unable to recover it.
00:27:11.846 [2024-11-20 19:04:33.919298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.846 [2024-11-20 19:04:33.919353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.846 [2024-11-20 19:04:33.919367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.846 [2024-11-20 19:04:33.919373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.846 [2024-11-20 19:04:33.919383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.846 [2024-11-20 19:04:33.919398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.846 qpair failed and we were unable to recover it.
00:27:11.846 [2024-11-20 19:04:33.929339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.846 [2024-11-20 19:04:33.929396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.846 [2024-11-20 19:04:33.929409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.846 [2024-11-20 19:04:33.929416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.846 [2024-11-20 19:04:33.929423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.846 [2024-11-20 19:04:33.929437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.846 qpair failed and we were unable to recover it.
00:27:11.846 [2024-11-20 19:04:33.939363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.846 [2024-11-20 19:04:33.939433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.846 [2024-11-20 19:04:33.939447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.846 [2024-11-20 19:04:33.939454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.846 [2024-11-20 19:04:33.939461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.846 [2024-11-20 19:04:33.939476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.846 qpair failed and we were unable to recover it.
00:27:11.846 [2024-11-20 19:04:33.949396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.846 [2024-11-20 19:04:33.949450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.846 [2024-11-20 19:04:33.949464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.846 [2024-11-20 19:04:33.949471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.846 [2024-11-20 19:04:33.949478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.846 [2024-11-20 19:04:33.949493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.846 qpair failed and we were unable to recover it.
00:27:11.846 [2024-11-20 19:04:33.959419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.846 [2024-11-20 19:04:33.959473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.846 [2024-11-20 19:04:33.959487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.846 [2024-11-20 19:04:33.959494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.846 [2024-11-20 19:04:33.959500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.846 [2024-11-20 19:04:33.959514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.846 qpair failed and we were unable to recover it.
00:27:11.846 [2024-11-20 19:04:33.969358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.846 [2024-11-20 19:04:33.969414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.846 [2024-11-20 19:04:33.969427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.846 [2024-11-20 19:04:33.969434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.846 [2024-11-20 19:04:33.969440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.846 [2024-11-20 19:04:33.969455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.846 qpair failed and we were unable to recover it.
00:27:11.846 [2024-11-20 19:04:33.979465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.846 [2024-11-20 19:04:33.979529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.846 [2024-11-20 19:04:33.979542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.846 [2024-11-20 19:04:33.979549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.846 [2024-11-20 19:04:33.979556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.846 [2024-11-20 19:04:33.979570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.846 qpair failed and we were unable to recover it.
00:27:11.846 [2024-11-20 19:04:33.989577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.846 [2024-11-20 19:04:33.989632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.846 [2024-11-20 19:04:33.989645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.846 [2024-11-20 19:04:33.989653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.846 [2024-11-20 19:04:33.989659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.846 [2024-11-20 19:04:33.989673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.846 qpair failed and we were unable to recover it.
00:27:11.846 [2024-11-20 19:04:33.999520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.846 [2024-11-20 19:04:33.999577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.846 [2024-11-20 19:04:33.999590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.846 [2024-11-20 19:04:33.999597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.846 [2024-11-20 19:04:33.999604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.846 [2024-11-20 19:04:33.999618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.846 qpair failed and we were unable to recover it.
00:27:11.846 [2024-11-20 19:04:34.009569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.846 [2024-11-20 19:04:34.009625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.846 [2024-11-20 19:04:34.009642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.846 [2024-11-20 19:04:34.009649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.846 [2024-11-20 19:04:34.009656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.846 [2024-11-20 19:04:34.009670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.846 qpair failed and we were unable to recover it.
00:27:11.846 [2024-11-20 19:04:34.019514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.846 [2024-11-20 19:04:34.019599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.846 [2024-11-20 19:04:34.019613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.846 [2024-11-20 19:04:34.019620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.846 [2024-11-20 19:04:34.019627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.846 [2024-11-20 19:04:34.019641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.846 qpair failed and we were unable to recover it.
00:27:11.846 [2024-11-20 19:04:34.029659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.846 [2024-11-20 19:04:34.029715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.846 [2024-11-20 19:04:34.029728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.846 [2024-11-20 19:04:34.029735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.846 [2024-11-20 19:04:34.029741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.846 [2024-11-20 19:04:34.029755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.846 qpair failed and we were unable to recover it.
00:27:11.847 [2024-11-20 19:04:34.039638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.847 [2024-11-20 19:04:34.039729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.847 [2024-11-20 19:04:34.039744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.847 [2024-11-20 19:04:34.039751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.847 [2024-11-20 19:04:34.039757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.847 [2024-11-20 19:04:34.039773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.847 qpair failed and we were unable to recover it.
00:27:11.847 [2024-11-20 19:04:34.049677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.847 [2024-11-20 19:04:34.049734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.847 [2024-11-20 19:04:34.049748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.847 [2024-11-20 19:04:34.049758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.847 [2024-11-20 19:04:34.049765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.847 [2024-11-20 19:04:34.049779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.847 qpair failed and we were unable to recover it.
00:27:11.847 [2024-11-20 19:04:34.059709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.847 [2024-11-20 19:04:34.059786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.847 [2024-11-20 19:04:34.059800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.847 [2024-11-20 19:04:34.059808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.847 [2024-11-20 19:04:34.059814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.847 [2024-11-20 19:04:34.059828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.847 qpair failed and we were unable to recover it.
00:27:11.847 [2024-11-20 19:04:34.069726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.847 [2024-11-20 19:04:34.069789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.847 [2024-11-20 19:04:34.069802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.847 [2024-11-20 19:04:34.069810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.847 [2024-11-20 19:04:34.069816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.847 [2024-11-20 19:04:34.069830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.847 qpair failed and we were unable to recover it.
00:27:11.847 [2024-11-20 19:04:34.079765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.847 [2024-11-20 19:04:34.079819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.847 [2024-11-20 19:04:34.079832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.847 [2024-11-20 19:04:34.079839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.847 [2024-11-20 19:04:34.079845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.847 [2024-11-20 19:04:34.079860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.847 qpair failed and we were unable to recover it.
00:27:11.847 [2024-11-20 19:04:34.089795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.847 [2024-11-20 19:04:34.089860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.847 [2024-11-20 19:04:34.089873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.847 [2024-11-20 19:04:34.089881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.847 [2024-11-20 19:04:34.089887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.847 [2024-11-20 19:04:34.089903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.847 qpair failed and we were unable to recover it.
00:27:11.847 [2024-11-20 19:04:34.099824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.847 [2024-11-20 19:04:34.099882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.847 [2024-11-20 19:04:34.099896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.847 [2024-11-20 19:04:34.099903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.847 [2024-11-20 19:04:34.099909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.847 [2024-11-20 19:04:34.099923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.847 qpair failed and we were unable to recover it.
00:27:11.847 [2024-11-20 19:04:34.109856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.847 [2024-11-20 19:04:34.109907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.847 [2024-11-20 19:04:34.109921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.847 [2024-11-20 19:04:34.109928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.847 [2024-11-20 19:04:34.109935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.847 [2024-11-20 19:04:34.109950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.847 qpair failed and we were unable to recover it.
00:27:11.847 [2024-11-20 19:04:34.119943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.847 [2024-11-20 19:04:34.120041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.847 [2024-11-20 19:04:34.120055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.847 [2024-11-20 19:04:34.120062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.847 [2024-11-20 19:04:34.120068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:11.847 [2024-11-20 19:04:34.120082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:11.847 qpair failed and we were unable to recover it.
00:27:11.847 [2024-11-20 19:04:34.129916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.847 [2024-11-20 19:04:34.129970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.847 [2024-11-20 19:04:34.129983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.847 [2024-11-20 19:04:34.129990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.847 [2024-11-20 19:04:34.129995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.847 [2024-11-20 19:04:34.130010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.847 qpair failed and we were unable to recover it. 
00:27:11.847 [2024-11-20 19:04:34.139952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.847 [2024-11-20 19:04:34.140033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.847 [2024-11-20 19:04:34.140046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.847 [2024-11-20 19:04:34.140053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.847 [2024-11-20 19:04:34.140059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.847 [2024-11-20 19:04:34.140073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.847 qpair failed and we were unable to recover it. 
00:27:11.847 [2024-11-20 19:04:34.149951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.847 [2024-11-20 19:04:34.150006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.847 [2024-11-20 19:04:34.150019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.847 [2024-11-20 19:04:34.150026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.847 [2024-11-20 19:04:34.150033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.847 [2024-11-20 19:04:34.150047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.847 qpair failed and we were unable to recover it. 
00:27:11.847 [2024-11-20 19:04:34.160018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.847 [2024-11-20 19:04:34.160069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.847 [2024-11-20 19:04:34.160083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.847 [2024-11-20 19:04:34.160091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.847 [2024-11-20 19:04:34.160097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:11.847 [2024-11-20 19:04:34.160111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.847 qpair failed and we were unable to recover it. 
00:27:12.107 [2024-11-20 19:04:34.170012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.107 [2024-11-20 19:04:34.170073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.107 [2024-11-20 19:04:34.170086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.107 [2024-11-20 19:04:34.170093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.107 [2024-11-20 19:04:34.170099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:12.107 [2024-11-20 19:04:34.170113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:12.107 qpair failed and we were unable to recover it.
00:27:12.107 [2024-11-20 19:04:34.180063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.107 [2024-11-20 19:04:34.180143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.107 [2024-11-20 19:04:34.180157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.107 [2024-11-20 19:04:34.180167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.107 [2024-11-20 19:04:34.180173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:12.107 [2024-11-20 19:04:34.180187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:12.107 qpair failed and we were unable to recover it.
00:27:12.107 [2024-11-20 19:04:34.190027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.107 [2024-11-20 19:04:34.190090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.107 [2024-11-20 19:04:34.190104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.107 [2024-11-20 19:04:34.190112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.107 [2024-11-20 19:04:34.190118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:12.107 [2024-11-20 19:04:34.190132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:12.107 qpair failed and we were unable to recover it.
00:27:12.107 [2024-11-20 19:04:34.200099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.107 [2024-11-20 19:04:34.200174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.107 [2024-11-20 19:04:34.200188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.107 [2024-11-20 19:04:34.200195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.107 [2024-11-20 19:04:34.200205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:12.107 [2024-11-20 19:04:34.200221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:12.107 qpair failed and we were unable to recover it.
00:27:12.107 [2024-11-20 19:04:34.210126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.107 [2024-11-20 19:04:34.210182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.107 [2024-11-20 19:04:34.210195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.107 [2024-11-20 19:04:34.210207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.107 [2024-11-20 19:04:34.210213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:12.107 [2024-11-20 19:04:34.210229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:12.107 qpair failed and we were unable to recover it.
00:27:12.107 [2024-11-20 19:04:34.220179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.107 [2024-11-20 19:04:34.220243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.107 [2024-11-20 19:04:34.220257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.107 [2024-11-20 19:04:34.220264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.107 [2024-11-20 19:04:34.220270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:12.107 [2024-11-20 19:04:34.220289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:12.107 qpair failed and we were unable to recover it.
00:27:12.107 [2024-11-20 19:04:34.230234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.107 [2024-11-20 19:04:34.230292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.107 [2024-11-20 19:04:34.230306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.107 [2024-11-20 19:04:34.230314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.107 [2024-11-20 19:04:34.230320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:12.107 [2024-11-20 19:04:34.230335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:12.107 qpair failed and we were unable to recover it.
00:27:12.108 [2024-11-20 19:04:34.240175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.108 [2024-11-20 19:04:34.240236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.108 [2024-11-20 19:04:34.240250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.108 [2024-11-20 19:04:34.240257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.108 [2024-11-20 19:04:34.240264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:12.108 [2024-11-20 19:04:34.240279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:12.108 qpair failed and we were unable to recover it.
00:27:12.108 [2024-11-20 19:04:34.250254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.108 [2024-11-20 19:04:34.250351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.108 [2024-11-20 19:04:34.250365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.108 [2024-11-20 19:04:34.250372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.108 [2024-11-20 19:04:34.250378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:12.108 [2024-11-20 19:04:34.250392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:12.108 qpair failed and we were unable to recover it.
00:27:12.108 [2024-11-20 19:04:34.260233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.108 [2024-11-20 19:04:34.260333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.108 [2024-11-20 19:04:34.260347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.108 [2024-11-20 19:04:34.260354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.108 [2024-11-20 19:04:34.260361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:12.108 [2024-11-20 19:04:34.260376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:12.108 qpair failed and we were unable to recover it.
00:27:12.108 [2024-11-20 19:04:34.270283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.108 [2024-11-20 19:04:34.270340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.108 [2024-11-20 19:04:34.270354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.108 [2024-11-20 19:04:34.270361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.108 [2024-11-20 19:04:34.270367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:12.108 [2024-11-20 19:04:34.270382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:12.108 qpair failed and we were unable to recover it.
00:27:12.108 [2024-11-20 19:04:34.280265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.108 [2024-11-20 19:04:34.280339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.108 [2024-11-20 19:04:34.280355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.108 [2024-11-20 19:04:34.280362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.108 [2024-11-20 19:04:34.280368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:12.108 [2024-11-20 19:04:34.280383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:12.108 qpair failed and we were unable to recover it.
00:27:12.108 [2024-11-20 19:04:34.290292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.108 [2024-11-20 19:04:34.290350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.108 [2024-11-20 19:04:34.290364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.108 [2024-11-20 19:04:34.290371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.108 [2024-11-20 19:04:34.290377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:12.108 [2024-11-20 19:04:34.290391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:12.108 qpair failed and we were unable to recover it.
00:27:12.108 [2024-11-20 19:04:34.300366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.108 [2024-11-20 19:04:34.300420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.108 [2024-11-20 19:04:34.300434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.108 [2024-11-20 19:04:34.300441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.108 [2024-11-20 19:04:34.300448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:12.108 [2024-11-20 19:04:34.300462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:12.108 qpair failed and we were unable to recover it.
00:27:12.108 [2024-11-20 19:04:34.310368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.108 [2024-11-20 19:04:34.310434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.108 [2024-11-20 19:04:34.310451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.108 [2024-11-20 19:04:34.310458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.108 [2024-11-20 19:04:34.310464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:12.108 [2024-11-20 19:04:34.310479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:12.108 qpair failed and we were unable to recover it.
00:27:12.108 [2024-11-20 19:04:34.320387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.108 [2024-11-20 19:04:34.320438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.108 [2024-11-20 19:04:34.320452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.108 [2024-11-20 19:04:34.320459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.108 [2024-11-20 19:04:34.320466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:12.108 [2024-11-20 19:04:34.320481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:12.108 qpair failed and we were unable to recover it.
00:27:12.108 [2024-11-20 19:04:34.330484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.108 [2024-11-20 19:04:34.330539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.108 [2024-11-20 19:04:34.330554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.108 [2024-11-20 19:04:34.330561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.108 [2024-11-20 19:04:34.330567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:12.108 [2024-11-20 19:04:34.330582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:12.108 qpair failed and we were unable to recover it.
00:27:12.108 [2024-11-20 19:04:34.340439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.108 [2024-11-20 19:04:34.340494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.108 [2024-11-20 19:04:34.340507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.108 [2024-11-20 19:04:34.340514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.108 [2024-11-20 19:04:34.340521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:12.108 [2024-11-20 19:04:34.340536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:12.108 qpair failed and we were unable to recover it.
00:27:12.108 [2024-11-20 19:04:34.350532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.108 [2024-11-20 19:04:34.350586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.108 [2024-11-20 19:04:34.350599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.108 [2024-11-20 19:04:34.350607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.108 [2024-11-20 19:04:34.350617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:12.108 [2024-11-20 19:04:34.350631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:12.108 qpair failed and we were unable to recover it.
00:27:12.108 [2024-11-20 19:04:34.360488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.108 [2024-11-20 19:04:34.360542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.108 [2024-11-20 19:04:34.360555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.108 [2024-11-20 19:04:34.360562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.108 [2024-11-20 19:04:34.360568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:12.108 [2024-11-20 19:04:34.360583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:12.108 qpair failed and we were unable to recover it.
00:27:12.109 [2024-11-20 19:04:34.370590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.109 [2024-11-20 19:04:34.370643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.109 [2024-11-20 19:04:34.370657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.109 [2024-11-20 19:04:34.370664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.109 [2024-11-20 19:04:34.370670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:12.109 [2024-11-20 19:04:34.370685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:12.109 qpair failed and we were unable to recover it.
00:27:12.109 [2024-11-20 19:04:34.380625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.109 [2024-11-20 19:04:34.380711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.109 [2024-11-20 19:04:34.380724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.109 [2024-11-20 19:04:34.380731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.109 [2024-11-20 19:04:34.380737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:12.109 [2024-11-20 19:04:34.380751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:12.109 qpair failed and we were unable to recover it.
00:27:12.109 [2024-11-20 19:04:34.390603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.109 [2024-11-20 19:04:34.390655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.109 [2024-11-20 19:04:34.390669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.109 [2024-11-20 19:04:34.390676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.109 [2024-11-20 19:04:34.390683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:12.109 [2024-11-20 19:04:34.390697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:12.109 qpair failed and we were unable to recover it.
00:27:12.109 [2024-11-20 19:04:34.400591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.109 [2024-11-20 19:04:34.400656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.109 [2024-11-20 19:04:34.400670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.109 [2024-11-20 19:04:34.400677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.109 [2024-11-20 19:04:34.400683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:12.109 [2024-11-20 19:04:34.400697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:12.109 qpair failed and we were unable to recover it.
00:27:12.109 [2024-11-20 19:04:34.410657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.109 [2024-11-20 19:04:34.410719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.109 [2024-11-20 19:04:34.410733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.109 [2024-11-20 19:04:34.410740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.109 [2024-11-20 19:04:34.410746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:12.109 [2024-11-20 19:04:34.410760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:12.109 qpair failed and we were unable to recover it.
00:27:12.109 [2024-11-20 19:04:34.420702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.109 [2024-11-20 19:04:34.420800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.109 [2024-11-20 19:04:34.420816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.109 [2024-11-20 19:04:34.420823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.109 [2024-11-20 19:04:34.420831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:12.109 [2024-11-20 19:04:34.420846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:12.109 qpair failed and we were unable to recover it.
00:27:12.109 [2024-11-20 19:04:34.430743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.109 [2024-11-20 19:04:34.430799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.109 [2024-11-20 19:04:34.430812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.109 [2024-11-20 19:04:34.430819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.109 [2024-11-20 19:04:34.430826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90
00:27:12.109 [2024-11-20 19:04:34.430841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:12.109 qpair failed and we were unable to recover it.
00:27:12.369 [2024-11-20 19:04:34.440715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.369 [2024-11-20 19:04:34.440802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.369 [2024-11-20 19:04:34.440819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.369 [2024-11-20 19:04:34.440825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.369 [2024-11-20 19:04:34.440831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.369 [2024-11-20 19:04:34.440845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.369 qpair failed and we were unable to recover it. 
00:27:12.369 [2024-11-20 19:04:34.450745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.369 [2024-11-20 19:04:34.450821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.369 [2024-11-20 19:04:34.450835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.369 [2024-11-20 19:04:34.450842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.369 [2024-11-20 19:04:34.450848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.369 [2024-11-20 19:04:34.450862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.369 qpair failed and we were unable to recover it. 
00:27:12.369 [2024-11-20 19:04:34.460872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.369 [2024-11-20 19:04:34.460924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.369 [2024-11-20 19:04:34.460937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.369 [2024-11-20 19:04:34.460944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.369 [2024-11-20 19:04:34.460950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.369 [2024-11-20 19:04:34.460965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.369 qpair failed and we were unable to recover it. 
00:27:12.369 [2024-11-20 19:04:34.470863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.369 [2024-11-20 19:04:34.470916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.369 [2024-11-20 19:04:34.470929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.369 [2024-11-20 19:04:34.470936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.369 [2024-11-20 19:04:34.470942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.369 [2024-11-20 19:04:34.470956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.369 qpair failed and we were unable to recover it. 
00:27:12.369 [2024-11-20 19:04:34.480858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.369 [2024-11-20 19:04:34.480936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.369 [2024-11-20 19:04:34.480950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.369 [2024-11-20 19:04:34.480957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.369 [2024-11-20 19:04:34.480968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.369 [2024-11-20 19:04:34.480982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.369 qpair failed and we were unable to recover it. 
00:27:12.369 [2024-11-20 19:04:34.490925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.369 [2024-11-20 19:04:34.490994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.369 [2024-11-20 19:04:34.491007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.369 [2024-11-20 19:04:34.491015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.369 [2024-11-20 19:04:34.491021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.369 [2024-11-20 19:04:34.491036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.369 qpair failed and we were unable to recover it. 
00:27:12.369 [2024-11-20 19:04:34.500965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.369 [2024-11-20 19:04:34.501047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.369 [2024-11-20 19:04:34.501061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.369 [2024-11-20 19:04:34.501068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.369 [2024-11-20 19:04:34.501074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.369 [2024-11-20 19:04:34.501088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.369 qpair failed and we were unable to recover it. 
00:27:12.369 [2024-11-20 19:04:34.510990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.369 [2024-11-20 19:04:34.511074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.369 [2024-11-20 19:04:34.511087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.370 [2024-11-20 19:04:34.511094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.370 [2024-11-20 19:04:34.511100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.370 [2024-11-20 19:04:34.511114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.370 qpair failed and we were unable to recover it. 
00:27:12.370 [2024-11-20 19:04:34.520979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.370 [2024-11-20 19:04:34.521071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.370 [2024-11-20 19:04:34.521085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.370 [2024-11-20 19:04:34.521092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.370 [2024-11-20 19:04:34.521098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.370 [2024-11-20 19:04:34.521113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.370 qpair failed and we were unable to recover it. 
00:27:12.370 [2024-11-20 19:04:34.531083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.370 [2024-11-20 19:04:34.531189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.370 [2024-11-20 19:04:34.531208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.370 [2024-11-20 19:04:34.531216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.370 [2024-11-20 19:04:34.531222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.370 [2024-11-20 19:04:34.531236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.370 qpair failed and we were unable to recover it. 
00:27:12.370 [2024-11-20 19:04:34.541059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.370 [2024-11-20 19:04:34.541133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.370 [2024-11-20 19:04:34.541147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.370 [2024-11-20 19:04:34.541154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.370 [2024-11-20 19:04:34.541160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.370 [2024-11-20 19:04:34.541175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.370 qpair failed and we were unable to recover it. 
00:27:12.370 [2024-11-20 19:04:34.551031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.370 [2024-11-20 19:04:34.551091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.370 [2024-11-20 19:04:34.551104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.370 [2024-11-20 19:04:34.551112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.370 [2024-11-20 19:04:34.551118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.370 [2024-11-20 19:04:34.551132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.370 qpair failed and we were unable to recover it. 
00:27:12.370 [2024-11-20 19:04:34.561069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.370 [2024-11-20 19:04:34.561121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.370 [2024-11-20 19:04:34.561135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.370 [2024-11-20 19:04:34.561141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.370 [2024-11-20 19:04:34.561148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.370 [2024-11-20 19:04:34.561163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.370 qpair failed and we were unable to recover it. 
00:27:12.370 [2024-11-20 19:04:34.571190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.370 [2024-11-20 19:04:34.571251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.370 [2024-11-20 19:04:34.571267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.370 [2024-11-20 19:04:34.571274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.370 [2024-11-20 19:04:34.571280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.370 [2024-11-20 19:04:34.571295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.370 qpair failed and we were unable to recover it. 
00:27:12.370 [2024-11-20 19:04:34.581189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.370 [2024-11-20 19:04:34.581247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.370 [2024-11-20 19:04:34.581260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.370 [2024-11-20 19:04:34.581268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.370 [2024-11-20 19:04:34.581274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.370 [2024-11-20 19:04:34.581289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.370 qpair failed and we were unable to recover it. 
00:27:12.370 [2024-11-20 19:04:34.591226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.370 [2024-11-20 19:04:34.591279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.370 [2024-11-20 19:04:34.591294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.370 [2024-11-20 19:04:34.591300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.370 [2024-11-20 19:04:34.591307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.370 [2024-11-20 19:04:34.591322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.370 qpair failed and we were unable to recover it. 
00:27:12.370 [2024-11-20 19:04:34.601156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.370 [2024-11-20 19:04:34.601256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.370 [2024-11-20 19:04:34.601270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.370 [2024-11-20 19:04:34.601277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.370 [2024-11-20 19:04:34.601283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.370 [2024-11-20 19:04:34.601298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.370 qpair failed and we were unable to recover it. 
00:27:12.370 [2024-11-20 19:04:34.611263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.370 [2024-11-20 19:04:34.611321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.370 [2024-11-20 19:04:34.611335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.370 [2024-11-20 19:04:34.611346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.370 [2024-11-20 19:04:34.611352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.370 [2024-11-20 19:04:34.611367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.370 qpair failed and we were unable to recover it. 
00:27:12.370 [2024-11-20 19:04:34.621229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.370 [2024-11-20 19:04:34.621326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.370 [2024-11-20 19:04:34.621341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.370 [2024-11-20 19:04:34.621348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.370 [2024-11-20 19:04:34.621354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.370 [2024-11-20 19:04:34.621370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.370 qpair failed and we were unable to recover it. 
00:27:12.370 [2024-11-20 19:04:34.631267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.370 [2024-11-20 19:04:34.631338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.370 [2024-11-20 19:04:34.631352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.370 [2024-11-20 19:04:34.631359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.370 [2024-11-20 19:04:34.631365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.370 [2024-11-20 19:04:34.631380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.370 qpair failed and we were unable to recover it. 
00:27:12.370 [2024-11-20 19:04:34.641326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.370 [2024-11-20 19:04:34.641378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.370 [2024-11-20 19:04:34.641391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.370 [2024-11-20 19:04:34.641398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.371 [2024-11-20 19:04:34.641405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.371 [2024-11-20 19:04:34.641420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.371 qpair failed and we were unable to recover it. 
00:27:12.371 [2024-11-20 19:04:34.651431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.371 [2024-11-20 19:04:34.651486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.371 [2024-11-20 19:04:34.651499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.371 [2024-11-20 19:04:34.651506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.371 [2024-11-20 19:04:34.651512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.371 [2024-11-20 19:04:34.651526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.371 qpair failed and we were unable to recover it. 
00:27:12.371 [2024-11-20 19:04:34.661412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.371 [2024-11-20 19:04:34.661471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.371 [2024-11-20 19:04:34.661485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.371 [2024-11-20 19:04:34.661492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.371 [2024-11-20 19:04:34.661498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.371 [2024-11-20 19:04:34.661512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.371 qpair failed and we were unable to recover it. 
00:27:12.371 [2024-11-20 19:04:34.671429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.371 [2024-11-20 19:04:34.671482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.371 [2024-11-20 19:04:34.671496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.371 [2024-11-20 19:04:34.671502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.371 [2024-11-20 19:04:34.671509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.371 [2024-11-20 19:04:34.671523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.371 qpair failed and we were unable to recover it. 
00:27:12.371 [2024-11-20 19:04:34.681454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.371 [2024-11-20 19:04:34.681508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.371 [2024-11-20 19:04:34.681521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.371 [2024-11-20 19:04:34.681528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.371 [2024-11-20 19:04:34.681534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.371 [2024-11-20 19:04:34.681548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.371 qpair failed and we were unable to recover it. 
00:27:12.371 [2024-11-20 19:04:34.691537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.371 [2024-11-20 19:04:34.691593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.371 [2024-11-20 19:04:34.691607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.371 [2024-11-20 19:04:34.691614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.371 [2024-11-20 19:04:34.691620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.371 [2024-11-20 19:04:34.691635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.371 qpair failed and we were unable to recover it. 
00:27:12.631 [2024-11-20 19:04:34.701529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.631 [2024-11-20 19:04:34.701588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.631 [2024-11-20 19:04:34.701602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.631 [2024-11-20 19:04:34.701609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.631 [2024-11-20 19:04:34.701615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.631 [2024-11-20 19:04:34.701630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.631 qpair failed and we were unable to recover it. 
00:27:12.631 [2024-11-20 19:04:34.711544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.631 [2024-11-20 19:04:34.711594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.631 [2024-11-20 19:04:34.711608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.631 [2024-11-20 19:04:34.711615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.631 [2024-11-20 19:04:34.711621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.631 [2024-11-20 19:04:34.711636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.631 qpair failed and we were unable to recover it. 
00:27:12.631 [2024-11-20 19:04:34.721598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.631 [2024-11-20 19:04:34.721652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.631 [2024-11-20 19:04:34.721666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.631 [2024-11-20 19:04:34.721673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.631 [2024-11-20 19:04:34.721680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.631 [2024-11-20 19:04:34.721694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.631 qpair failed and we were unable to recover it. 
00:27:12.631 [2024-11-20 19:04:34.731602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.631 [2024-11-20 19:04:34.731657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.631 [2024-11-20 19:04:34.731670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.631 [2024-11-20 19:04:34.731677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.631 [2024-11-20 19:04:34.731684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.631 [2024-11-20 19:04:34.731698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.631 qpair failed and we were unable to recover it. 
00:27:12.631 [2024-11-20 19:04:34.741659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.631 [2024-11-20 19:04:34.741744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.631 [2024-11-20 19:04:34.741758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.631 [2024-11-20 19:04:34.741768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.631 [2024-11-20 19:04:34.741774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.631 [2024-11-20 19:04:34.741788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.631 qpair failed and we were unable to recover it. 
00:27:12.631 [2024-11-20 19:04:34.751683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.631 [2024-11-20 19:04:34.751736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.631 [2024-11-20 19:04:34.751751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.631 [2024-11-20 19:04:34.751758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.631 [2024-11-20 19:04:34.751765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.631 [2024-11-20 19:04:34.751780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.631 qpair failed and we were unable to recover it. 
00:27:12.631 [2024-11-20 19:04:34.761728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.631 [2024-11-20 19:04:34.761804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.631 [2024-11-20 19:04:34.761819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.632 [2024-11-20 19:04:34.761826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.632 [2024-11-20 19:04:34.761832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.632 [2024-11-20 19:04:34.761847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.632 qpair failed and we were unable to recover it. 
00:27:12.632 [2024-11-20 19:04:34.771756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.632 [2024-11-20 19:04:34.771813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.632 [2024-11-20 19:04:34.771827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.632 [2024-11-20 19:04:34.771834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.632 [2024-11-20 19:04:34.771840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.632 [2024-11-20 19:04:34.771855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.632 qpair failed and we were unable to recover it. 
00:27:12.632 [2024-11-20 19:04:34.781719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.632 [2024-11-20 19:04:34.781775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.632 [2024-11-20 19:04:34.781789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.632 [2024-11-20 19:04:34.781797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.632 [2024-11-20 19:04:34.781803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.632 [2024-11-20 19:04:34.781821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.632 qpair failed and we were unable to recover it. 
00:27:12.632 [2024-11-20 19:04:34.791768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.632 [2024-11-20 19:04:34.791835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.632 [2024-11-20 19:04:34.791850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.632 [2024-11-20 19:04:34.791857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.632 [2024-11-20 19:04:34.791863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.632 [2024-11-20 19:04:34.791877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.632 qpair failed and we were unable to recover it. 
00:27:12.632 [2024-11-20 19:04:34.801801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.632 [2024-11-20 19:04:34.801856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.632 [2024-11-20 19:04:34.801869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.632 [2024-11-20 19:04:34.801876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.632 [2024-11-20 19:04:34.801882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.632 [2024-11-20 19:04:34.801897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.632 qpair failed and we were unable to recover it. 
00:27:12.632 [2024-11-20 19:04:34.811835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.632 [2024-11-20 19:04:34.811901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.632 [2024-11-20 19:04:34.811914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.632 [2024-11-20 19:04:34.811922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.632 [2024-11-20 19:04:34.811927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.632 [2024-11-20 19:04:34.811942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.632 qpair failed and we were unable to recover it. 
00:27:12.632 [2024-11-20 19:04:34.821852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.632 [2024-11-20 19:04:34.821911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.632 [2024-11-20 19:04:34.821925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.632 [2024-11-20 19:04:34.821932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.632 [2024-11-20 19:04:34.821939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.632 [2024-11-20 19:04:34.821954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.632 qpair failed and we were unable to recover it. 
00:27:12.632 [2024-11-20 19:04:34.831808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.632 [2024-11-20 19:04:34.831863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.632 [2024-11-20 19:04:34.831876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.632 [2024-11-20 19:04:34.831883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.632 [2024-11-20 19:04:34.831890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.632 [2024-11-20 19:04:34.831904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.632 qpair failed and we were unable to recover it. 
00:27:12.632 [2024-11-20 19:04:34.841904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.632 [2024-11-20 19:04:34.841959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.632 [2024-11-20 19:04:34.841973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.632 [2024-11-20 19:04:34.841980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.632 [2024-11-20 19:04:34.841987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.632 [2024-11-20 19:04:34.842001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.632 qpair failed and we were unable to recover it. 
00:27:12.632 [2024-11-20 19:04:34.851937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.632 [2024-11-20 19:04:34.851995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.632 [2024-11-20 19:04:34.852009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.632 [2024-11-20 19:04:34.852015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.632 [2024-11-20 19:04:34.852022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.632 [2024-11-20 19:04:34.852037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.632 qpair failed and we were unable to recover it. 
00:27:12.632 [2024-11-20 19:04:34.861971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.632 [2024-11-20 19:04:34.862061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.632 [2024-11-20 19:04:34.862077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.632 [2024-11-20 19:04:34.862083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.632 [2024-11-20 19:04:34.862090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.632 [2024-11-20 19:04:34.862105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.632 qpair failed and we were unable to recover it. 
00:27:12.632 [2024-11-20 19:04:34.871987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.632 [2024-11-20 19:04:34.872040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.632 [2024-11-20 19:04:34.872056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.632 [2024-11-20 19:04:34.872064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.632 [2024-11-20 19:04:34.872070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.632 [2024-11-20 19:04:34.872085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.632 qpair failed and we were unable to recover it. 
00:27:12.632 [2024-11-20 19:04:34.882033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.632 [2024-11-20 19:04:34.882113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.632 [2024-11-20 19:04:34.882128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.632 [2024-11-20 19:04:34.882135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.632 [2024-11-20 19:04:34.882141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.632 [2024-11-20 19:04:34.882156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.632 qpair failed and we were unable to recover it. 
00:27:12.632 [2024-11-20 19:04:34.892051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.632 [2024-11-20 19:04:34.892109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.633 [2024-11-20 19:04:34.892123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.633 [2024-11-20 19:04:34.892129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.633 [2024-11-20 19:04:34.892136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.633 [2024-11-20 19:04:34.892151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.633 qpair failed and we were unable to recover it. 
00:27:12.633 [2024-11-20 19:04:34.902068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.633 [2024-11-20 19:04:34.902118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.633 [2024-11-20 19:04:34.902131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.633 [2024-11-20 19:04:34.902138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.633 [2024-11-20 19:04:34.902145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.633 [2024-11-20 19:04:34.902160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.633 qpair failed and we were unable to recover it. 
00:27:12.633 [2024-11-20 19:04:34.912128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.633 [2024-11-20 19:04:34.912186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.633 [2024-11-20 19:04:34.912200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.633 [2024-11-20 19:04:34.912211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.633 [2024-11-20 19:04:34.912221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.633 [2024-11-20 19:04:34.912236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.633 qpair failed and we were unable to recover it. 
00:27:12.633 [2024-11-20 19:04:34.922136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.633 [2024-11-20 19:04:34.922192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.633 [2024-11-20 19:04:34.922210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.633 [2024-11-20 19:04:34.922217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.633 [2024-11-20 19:04:34.922223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.633 [2024-11-20 19:04:34.922239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.633 qpair failed and we were unable to recover it. 
00:27:12.633 [2024-11-20 19:04:34.932162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.633 [2024-11-20 19:04:34.932221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.633 [2024-11-20 19:04:34.932235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.633 [2024-11-20 19:04:34.932242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.633 [2024-11-20 19:04:34.932249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.633 [2024-11-20 19:04:34.932264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.633 qpair failed and we were unable to recover it. 
00:27:12.633 [2024-11-20 19:04:34.942115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.633 [2024-11-20 19:04:34.942171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.633 [2024-11-20 19:04:34.942184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.633 [2024-11-20 19:04:34.942191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.633 [2024-11-20 19:04:34.942197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.633 [2024-11-20 19:04:34.942216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.633 qpair failed and we were unable to recover it. 
00:27:12.633 [2024-11-20 19:04:34.952217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.633 [2024-11-20 19:04:34.952270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.633 [2024-11-20 19:04:34.952283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.633 [2024-11-20 19:04:34.952290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.633 [2024-11-20 19:04:34.952296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.633 [2024-11-20 19:04:34.952311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.633 qpair failed and we were unable to recover it. 
00:27:12.893 [2024-11-20 19:04:34.962256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.893 [2024-11-20 19:04:34.962313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.893 [2024-11-20 19:04:34.962328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.893 [2024-11-20 19:04:34.962336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.893 [2024-11-20 19:04:34.962342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.893 [2024-11-20 19:04:34.962356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.893 qpair failed and we were unable to recover it. 
00:27:12.893 [2024-11-20 19:04:34.972280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.893 [2024-11-20 19:04:34.972339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.893 [2024-11-20 19:04:34.972353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.893 [2024-11-20 19:04:34.972360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.893 [2024-11-20 19:04:34.972367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.893 [2024-11-20 19:04:34.972381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.893 qpair failed and we were unable to recover it. 
00:27:12.893 [2024-11-20 19:04:34.982323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.893 [2024-11-20 19:04:34.982382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.893 [2024-11-20 19:04:34.982395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.893 [2024-11-20 19:04:34.982403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.893 [2024-11-20 19:04:34.982410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.893 [2024-11-20 19:04:34.982424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.893 qpair failed and we were unable to recover it. 
00:27:12.893 [2024-11-20 19:04:34.992330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.893 [2024-11-20 19:04:34.992386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.893 [2024-11-20 19:04:34.992399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.893 [2024-11-20 19:04:34.992408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.893 [2024-11-20 19:04:34.992414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.893 [2024-11-20 19:04:34.992428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.893 qpair failed and we were unable to recover it. 
00:27:12.893 [2024-11-20 19:04:35.002375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.893 [2024-11-20 19:04:35.002433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.893 [2024-11-20 19:04:35.002449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.893 [2024-11-20 19:04:35.002457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.893 [2024-11-20 19:04:35.002463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.893 [2024-11-20 19:04:35.002478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.893 qpair failed and we were unable to recover it. 
00:27:12.893 [2024-11-20 19:04:35.012452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.893 [2024-11-20 19:04:35.012522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.893 [2024-11-20 19:04:35.012536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.893 [2024-11-20 19:04:35.012543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.893 [2024-11-20 19:04:35.012549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.893 [2024-11-20 19:04:35.012565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.893 qpair failed and we were unable to recover it. 
00:27:12.894 [2024-11-20 19:04:35.022425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.894 [2024-11-20 19:04:35.022477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.894 [2024-11-20 19:04:35.022490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.894 [2024-11-20 19:04:35.022497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.894 [2024-11-20 19:04:35.022503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.894 [2024-11-20 19:04:35.022518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.894 qpair failed and we were unable to recover it. 
00:27:12.894 [2024-11-20 19:04:35.032453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.894 [2024-11-20 19:04:35.032510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.894 [2024-11-20 19:04:35.032525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.894 [2024-11-20 19:04:35.032534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.894 [2024-11-20 19:04:35.032541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.894 [2024-11-20 19:04:35.032556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.894 qpair failed and we were unable to recover it. 
00:27:12.894 [2024-11-20 19:04:35.042503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.894 [2024-11-20 19:04:35.042559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.894 [2024-11-20 19:04:35.042573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.894 [2024-11-20 19:04:35.042580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.894 [2024-11-20 19:04:35.042590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.894 [2024-11-20 19:04:35.042605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.894 qpair failed and we were unable to recover it. 
00:27:12.894 [2024-11-20 19:04:35.052544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.894 [2024-11-20 19:04:35.052598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.894 [2024-11-20 19:04:35.052611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.894 [2024-11-20 19:04:35.052618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.894 [2024-11-20 19:04:35.052625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.894 [2024-11-20 19:04:35.052639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.894 qpair failed and we were unable to recover it. 
00:27:12.894 [2024-11-20 19:04:35.062491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.894 [2024-11-20 19:04:35.062548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.894 [2024-11-20 19:04:35.062562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.894 [2024-11-20 19:04:35.062569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.894 [2024-11-20 19:04:35.062576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.894 [2024-11-20 19:04:35.062590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.894 qpair failed and we were unable to recover it. 
00:27:12.894 [2024-11-20 19:04:35.072575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.894 [2024-11-20 19:04:35.072632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.894 [2024-11-20 19:04:35.072645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.894 [2024-11-20 19:04:35.072652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.894 [2024-11-20 19:04:35.072660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.894 [2024-11-20 19:04:35.072675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.894 qpair failed and we were unable to recover it. 
00:27:12.894 [2024-11-20 19:04:35.082565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.894 [2024-11-20 19:04:35.082620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.894 [2024-11-20 19:04:35.082634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.894 [2024-11-20 19:04:35.082641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.894 [2024-11-20 19:04:35.082648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.894 [2024-11-20 19:04:35.082663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.894 qpair failed and we were unable to recover it. 
00:27:12.894 [2024-11-20 19:04:35.092618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.894 [2024-11-20 19:04:35.092672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.894 [2024-11-20 19:04:35.092686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.894 [2024-11-20 19:04:35.092694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.894 [2024-11-20 19:04:35.092701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.894 [2024-11-20 19:04:35.092715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.894 qpair failed and we were unable to recover it. 
00:27:12.894 [2024-11-20 19:04:35.102632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.894 [2024-11-20 19:04:35.102691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.894 [2024-11-20 19:04:35.102704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.894 [2024-11-20 19:04:35.102711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.894 [2024-11-20 19:04:35.102718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.894 [2024-11-20 19:04:35.102732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.894 qpair failed and we were unable to recover it. 
00:27:12.894 [2024-11-20 19:04:35.112668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.894 [2024-11-20 19:04:35.112724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.894 [2024-11-20 19:04:35.112738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.894 [2024-11-20 19:04:35.112745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.894 [2024-11-20 19:04:35.112752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.894 [2024-11-20 19:04:35.112767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.894 qpair failed and we were unable to recover it. 
00:27:12.894 [2024-11-20 19:04:35.122715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.894 [2024-11-20 19:04:35.122793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.894 [2024-11-20 19:04:35.122807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.894 [2024-11-20 19:04:35.122814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.894 [2024-11-20 19:04:35.122820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.894 [2024-11-20 19:04:35.122835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.894 qpair failed and we were unable to recover it. 
00:27:12.894 [2024-11-20 19:04:35.132667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.894 [2024-11-20 19:04:35.132725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.894 [2024-11-20 19:04:35.132741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.894 [2024-11-20 19:04:35.132748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.894 [2024-11-20 19:04:35.132755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.894 [2024-11-20 19:04:35.132769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.894 qpair failed and we were unable to recover it. 
00:27:12.894 [2024-11-20 19:04:35.142758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.894 [2024-11-20 19:04:35.142811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.894 [2024-11-20 19:04:35.142824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.894 [2024-11-20 19:04:35.142831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.894 [2024-11-20 19:04:35.142838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.894 [2024-11-20 19:04:35.142852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.894 qpair failed and we were unable to recover it. 
00:27:12.894 [2024-11-20 19:04:35.152775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.895 [2024-11-20 19:04:35.152851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.895 [2024-11-20 19:04:35.152865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.895 [2024-11-20 19:04:35.152872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.895 [2024-11-20 19:04:35.152878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.895 [2024-11-20 19:04:35.152893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.895 qpair failed and we were unable to recover it. 
00:27:12.895 [2024-11-20 19:04:35.162726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.895 [2024-11-20 19:04:35.162816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.895 [2024-11-20 19:04:35.162830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.895 [2024-11-20 19:04:35.162837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.895 [2024-11-20 19:04:35.162843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.895 [2024-11-20 19:04:35.162857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.895 qpair failed and we were unable to recover it. 
00:27:12.895 [2024-11-20 19:04:35.172835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.895 [2024-11-20 19:04:35.172891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.895 [2024-11-20 19:04:35.172904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.895 [2024-11-20 19:04:35.172916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.895 [2024-11-20 19:04:35.172923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.895 [2024-11-20 19:04:35.172937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.895 qpair failed and we were unable to recover it. 
00:27:12.895 [2024-11-20 19:04:35.182865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.895 [2024-11-20 19:04:35.182924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.895 [2024-11-20 19:04:35.182937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.895 [2024-11-20 19:04:35.182944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.895 [2024-11-20 19:04:35.182950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.895 [2024-11-20 19:04:35.182965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.895 qpair failed and we were unable to recover it. 
00:27:12.895 [2024-11-20 19:04:35.192933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.895 [2024-11-20 19:04:35.193013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.895 [2024-11-20 19:04:35.193028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.895 [2024-11-20 19:04:35.193035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.895 [2024-11-20 19:04:35.193041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.895 [2024-11-20 19:04:35.193055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.895 qpair failed and we were unable to recover it. 
00:27:12.895 [2024-11-20 19:04:35.202913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.895 [2024-11-20 19:04:35.202964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.895 [2024-11-20 19:04:35.202978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.895 [2024-11-20 19:04:35.202985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.895 [2024-11-20 19:04:35.202991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.895 [2024-11-20 19:04:35.203005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.895 qpair failed and we were unable to recover it. 
00:27:12.895 [2024-11-20 19:04:35.212916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.895 [2024-11-20 19:04:35.212973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.895 [2024-11-20 19:04:35.212987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.895 [2024-11-20 19:04:35.212993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.895 [2024-11-20 19:04:35.213000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:12.895 [2024-11-20 19:04:35.213018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.895 qpair failed and we were unable to recover it. 
00:27:13.155 [2024-11-20 19:04:35.222980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.155 [2024-11-20 19:04:35.223039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.155 [2024-11-20 19:04:35.223053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.155 [2024-11-20 19:04:35.223061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.155 [2024-11-20 19:04:35.223067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:13.155 [2024-11-20 19:04:35.223082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:13.155 qpair failed and we were unable to recover it. 
00:27:13.155 [2024-11-20 19:04:35.233045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.155 [2024-11-20 19:04:35.233097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.155 [2024-11-20 19:04:35.233110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.155 [2024-11-20 19:04:35.233117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.155 [2024-11-20 19:04:35.233124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:13.155 [2024-11-20 19:04:35.233138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:13.155 qpair failed and we were unable to recover it. 
00:27:13.155 [2024-11-20 19:04:35.243044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.155 [2024-11-20 19:04:35.243099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.155 [2024-11-20 19:04:35.243113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.155 [2024-11-20 19:04:35.243120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.155 [2024-11-20 19:04:35.243127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:13.155 [2024-11-20 19:04:35.243142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:13.155 qpair failed and we were unable to recover it. 
00:27:13.155 [2024-11-20 19:04:35.253074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.155 [2024-11-20 19:04:35.253130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.155 [2024-11-20 19:04:35.253143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.155 [2024-11-20 19:04:35.253150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.155 [2024-11-20 19:04:35.253157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:13.155 [2024-11-20 19:04:35.253172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:13.155 qpair failed and we were unable to recover it. 
00:27:13.155 [2024-11-20 19:04:35.263102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.155 [2024-11-20 19:04:35.263200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.155 [2024-11-20 19:04:35.263219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.155 [2024-11-20 19:04:35.263227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.155 [2024-11-20 19:04:35.263233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:13.155 [2024-11-20 19:04:35.263247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:13.155 qpair failed and we were unable to recover it. 
00:27:13.155 [2024-11-20 19:04:35.273140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.156 [2024-11-20 19:04:35.273193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.156 [2024-11-20 19:04:35.273213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.156 [2024-11-20 19:04:35.273220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.156 [2024-11-20 19:04:35.273226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:13.156 [2024-11-20 19:04:35.273241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:13.156 qpair failed and we were unable to recover it. 
00:27:13.156 [2024-11-20 19:04:35.283150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.156 [2024-11-20 19:04:35.283215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.156 [2024-11-20 19:04:35.283228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.156 [2024-11-20 19:04:35.283236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.156 [2024-11-20 19:04:35.283242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:13.156 [2024-11-20 19:04:35.283256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:13.156 qpair failed and we were unable to recover it. 
00:27:13.156 [2024-11-20 19:04:35.293194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.156 [2024-11-20 19:04:35.293273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.156 [2024-11-20 19:04:35.293289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.156 [2024-11-20 19:04:35.293295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.156 [2024-11-20 19:04:35.293301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:13.156 [2024-11-20 19:04:35.293319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:13.156 qpair failed and we were unable to recover it. 
00:27:13.156 [2024-11-20 19:04:35.303199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.156 [2024-11-20 19:04:35.303254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.156 [2024-11-20 19:04:35.303267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.156 [2024-11-20 19:04:35.303278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.156 [2024-11-20 19:04:35.303284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:13.156 [2024-11-20 19:04:35.303299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:13.156 qpair failed and we were unable to recover it. 
00:27:13.156 [2024-11-20 19:04:35.313251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.156 [2024-11-20 19:04:35.313322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.156 [2024-11-20 19:04:35.313336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.156 [2024-11-20 19:04:35.313343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.156 [2024-11-20 19:04:35.313349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:13.156 [2024-11-20 19:04:35.313364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:13.156 qpair failed and we were unable to recover it. 
00:27:13.156 [2024-11-20 19:04:35.323278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.156 [2024-11-20 19:04:35.323336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.156 [2024-11-20 19:04:35.323349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.156 [2024-11-20 19:04:35.323357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.156 [2024-11-20 19:04:35.323363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:13.156 [2024-11-20 19:04:35.323378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:13.156 qpair failed and we were unable to recover it. 
00:27:13.156 [2024-11-20 19:04:35.333311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.156 [2024-11-20 19:04:35.333367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.156 [2024-11-20 19:04:35.333381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.156 [2024-11-20 19:04:35.333388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.156 [2024-11-20 19:04:35.333395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:13.156 [2024-11-20 19:04:35.333410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:13.156 qpair failed and we were unable to recover it. 
00:27:13.418 [2024-11-20 19:04:35.644211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.418 [2024-11-20 19:04:35.644264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.418 [2024-11-20 19:04:35.644278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.418 [2024-11-20 19:04:35.644285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.418 [2024-11-20 19:04:35.644291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:27:13.418 [2024-11-20 19:04:35.644307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:13.418 qpair failed and we were unable to recover it. 00:27:13.418 [2024-11-20 19:04:35.644416] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:27:13.418 A controller has encountered a failure and is being reset. 00:27:13.418 [2024-11-20 19:04:35.644524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b78af0 (9): Bad file descriptor 00:27:13.418 Controller properly reset. 
00:27:13.677 Initializing NVMe Controllers 00:27:13.677 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:13.677 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:13.677 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:13.677 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:13.678 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:13.678 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:13.678 Initialization complete. Launching workers. 00:27:13.678 Starting thread on core 1 00:27:13.678 Starting thread on core 2 00:27:13.678 Starting thread on core 3 00:27:13.678 Starting thread on core 0 00:27:13.678 19:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:13.678 00:27:13.678 real 0m11.439s 00:27:13.678 user 0m21.881s 00:27:13.678 sys 0m4.734s 00:27:13.678 19:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:13.678 19:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:13.678 ************************************ 00:27:13.678 END TEST nvmf_target_disconnect_tc2 00:27:13.678 ************************************ 00:27:13.678 19:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:13.678 19:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:13.678 19:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:13.678 19:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:13.678 19:04:35 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:27:13.678 19:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:13.678 19:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:27:13.678 19:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:13.678 19:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:13.678 rmmod nvme_tcp 00:27:13.678 rmmod nvme_fabrics 00:27:13.678 rmmod nvme_keyring 00:27:13.678 19:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:13.678 19:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:27:13.678 19:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:27:13.678 19:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3804623 ']' 00:27:13.678 19:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3804623 00:27:13.678 19:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3804623 ']' 00:27:13.678 19:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3804623 00:27:13.678 19:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:27:13.678 19:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:13.678 19:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3804623 00:27:13.678 19:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:27:13.678 19:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 
00:27:13.678 19:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3804623' 00:27:13.678 killing process with pid 3804623 00:27:13.678 19:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 3804623 00:27:13.678 19:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3804623 00:27:13.937 19:04:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:13.937 19:04:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:13.938 19:04:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:13.938 19:04:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:27:13.938 19:04:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:27:13.938 19:04:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:13.938 19:04:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:27:13.938 19:04:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:13.938 19:04:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:13.938 19:04:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:13.938 19:04:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:13.938 19:04:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:15.843 19:04:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:15.843 00:27:15.843 real 0m20.194s 00:27:15.843 user 0m49.696s 00:27:15.843 
sys 0m9.628s 00:27:15.843 19:04:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:15.843 19:04:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:15.843 ************************************ 00:27:15.843 END TEST nvmf_target_disconnect 00:27:15.843 ************************************ 00:27:16.123 19:04:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:16.123 00:27:16.123 real 5m55.659s 00:27:16.123 user 10m41.489s 00:27:16.123 sys 1m59.160s 00:27:16.123 19:04:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:16.123 19:04:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.123 ************************************ 00:27:16.123 END TEST nvmf_host 00:27:16.123 ************************************ 00:27:16.123 19:04:38 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:27:16.123 19:04:38 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:27:16.123 19:04:38 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:16.123 19:04:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:16.123 19:04:38 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:16.123 19:04:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:16.123 ************************************ 00:27:16.123 START TEST nvmf_target_core_interrupt_mode 00:27:16.123 ************************************ 00:27:16.123 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:16.123 * Looking for test storage... 
00:27:16.123 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:16.123 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:16.123 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:27:16.123 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:16.123 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:16.123 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:16.123 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:16.123 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:16.123 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:27:16.123 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:27:16.123 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:27:16.123 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:27:16.123 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:27:16.123 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:27:16.123 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:27:16.123 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:16.123 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:27:16.123 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:27:16.123 19:04:38 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:16.123 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:16.123 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:27:16.444 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:27:16.444 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:16.444 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:27:16.444 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:27:16.444 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:27:16.444 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:27:16.444 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:16.444 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:27:16.444 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:27:16.444 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:16.444 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:16.444 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:27:16.444 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:16.444 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:16.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.444 --rc 
genhtml_branch_coverage=1 00:27:16.444 --rc genhtml_function_coverage=1 00:27:16.444 --rc genhtml_legend=1 00:27:16.444 --rc geninfo_all_blocks=1 00:27:16.444 --rc geninfo_unexecuted_blocks=1 00:27:16.444 00:27:16.444 ' 00:27:16.444 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:16.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.444 --rc genhtml_branch_coverage=1 00:27:16.444 --rc genhtml_function_coverage=1 00:27:16.444 --rc genhtml_legend=1 00:27:16.444 --rc geninfo_all_blocks=1 00:27:16.444 --rc geninfo_unexecuted_blocks=1 00:27:16.444 00:27:16.444 ' 00:27:16.444 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:16.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.444 --rc genhtml_branch_coverage=1 00:27:16.444 --rc genhtml_function_coverage=1 00:27:16.444 --rc genhtml_legend=1 00:27:16.444 --rc geninfo_all_blocks=1 00:27:16.444 --rc geninfo_unexecuted_blocks=1 00:27:16.444 00:27:16.444 ' 00:27:16.444 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:16.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.444 --rc genhtml_branch_coverage=1 00:27:16.444 --rc genhtml_function_coverage=1 00:27:16.444 --rc genhtml_legend=1 00:27:16.444 --rc geninfo_all_blocks=1 00:27:16.444 --rc geninfo_unexecuted_blocks=1 00:27:16.444 00:27:16.444 ' 00:27:16.444 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:27:16.444 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:27:16.444 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:16.444 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:27:16.444 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:16.444 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:16.444 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:16.444 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:16.444 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:16.444 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:16.444 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:16.444 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:16.444 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:16.444 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:16.444 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:16.444 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:16.444 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:16.444 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:16.444 
19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:16.444 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.445 19:04:38 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:16.445 
19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:16.445 ************************************ 00:27:16.445 START TEST nvmf_abort 00:27:16.445 ************************************ 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:16.445 * Looking for test storage... 
00:27:16.445 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:27:16.445 19:04:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:16.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.445 --rc genhtml_branch_coverage=1 00:27:16.445 --rc genhtml_function_coverage=1 00:27:16.445 --rc genhtml_legend=1 00:27:16.445 --rc geninfo_all_blocks=1 00:27:16.445 --rc geninfo_unexecuted_blocks=1 00:27:16.445 00:27:16.445 ' 00:27:16.445 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:16.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.446 --rc genhtml_branch_coverage=1 00:27:16.446 --rc genhtml_function_coverage=1 00:27:16.446 --rc genhtml_legend=1 00:27:16.446 --rc geninfo_all_blocks=1 00:27:16.446 --rc geninfo_unexecuted_blocks=1 00:27:16.446 00:27:16.446 ' 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:16.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.446 --rc genhtml_branch_coverage=1 00:27:16.446 --rc genhtml_function_coverage=1 00:27:16.446 --rc genhtml_legend=1 00:27:16.446 --rc geninfo_all_blocks=1 00:27:16.446 --rc geninfo_unexecuted_blocks=1 00:27:16.446 00:27:16.446 ' 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:16.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.446 --rc genhtml_branch_coverage=1 00:27:16.446 --rc genhtml_function_coverage=1 00:27:16.446 --rc genhtml_legend=1 00:27:16.446 --rc geninfo_all_blocks=1 00:27:16.446 --rc geninfo_unexecuted_blocks=1 00:27:16.446 00:27:16.446 ' 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:16.446 19:04:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:16.446 19:04:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:27:16.446 19:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:23.023 19:04:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:23.023 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:23.023 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:23.023 
19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:23.023 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:23.024 Found net devices under 0000:86:00.0: cvl_0_0 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:23.024 Found net devices under 0000:86:00.1: cvl_0_1 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:23.024 19:04:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:23.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:23.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:27:23.024 00:27:23.024 --- 10.0.0.2 ping statistics --- 00:27:23.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.024 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:23.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:23.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:27:23.024 00:27:23.024 --- 10.0.0.1 ping statistics --- 00:27:23.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.024 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3809322 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3809322 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3809322 ']' 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:23.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:23.024 [2024-11-20 19:04:44.723964] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:23.024 [2024-11-20 19:04:44.724870] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:27:23.024 [2024-11-20 19:04:44.724903] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:23.024 [2024-11-20 19:04:44.803029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:23.024 [2024-11-20 19:04:44.844086] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:23.024 [2024-11-20 19:04:44.844122] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:23.024 [2024-11-20 19:04:44.844129] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:23.024 [2024-11-20 19:04:44.844134] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:23.024 [2024-11-20 19:04:44.844139] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:23.024 [2024-11-20 19:04:44.845566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:23.024 [2024-11-20 19:04:44.845583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:23.024 [2024-11-20 19:04:44.845588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:23.024 [2024-11-20 19:04:44.912250] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:23.024 [2024-11-20 19:04:44.913019] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:23.024 [2024-11-20 19:04:44.913363] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:27:23.024 [2024-11-20 19:04:44.913436] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:23.024 [2024-11-20 19:04:44.982449] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:23.024 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.025 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:27:23.025 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.025 19:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:27:23.025 Malloc0 00:27:23.025 19:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.025 19:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:23.025 19:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.025 19:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:23.025 Delay0 00:27:23.025 19:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.025 19:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:23.025 19:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.025 19:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:23.025 19:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.025 19:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:27:23.025 19:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.025 19:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:23.025 19:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.025 19:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:27:23.025 19:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.025 19:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:23.025 [2024-11-20 19:04:45.070367] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:23.025 19:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.025 19:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:23.025 19:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.025 19:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:23.025 19:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.025 19:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:27:23.025 [2024-11-20 19:04:45.195902] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:27:25.559 Initializing NVMe Controllers 00:27:25.559 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:27:25.559 controller IO queue size 128 less than required 00:27:25.559 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:27:25.559 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:27:25.559 Initialization complete. Launching workers. 
00:27:25.559 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 38362 00:27:25.559 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38419, failed to submit 66 00:27:25.559 success 38362, unsuccessful 57, failed 0 00:27:25.559 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:25.559 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.559 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:25.559 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.559 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:25.559 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:27:25.559 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:25.559 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:27:25.559 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:25.559 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:27:25.559 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:25.559 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:25.559 rmmod nvme_tcp 00:27:25.559 rmmod nvme_fabrics 00:27:25.559 rmmod nvme_keyring 00:27:25.559 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:25.559 19:04:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:27:25.559 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:27:25.559 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3809322 ']' 00:27:25.559 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3809322 00:27:25.559 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3809322 ']' 00:27:25.560 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3809322 00:27:25.560 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:27:25.560 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:25.560 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3809322 00:27:25.560 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:25.560 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:25.560 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3809322' 00:27:25.560 killing process with pid 3809322 00:27:25.560 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3809322 00:27:25.560 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3809322 00:27:25.560 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:25.560 19:04:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:25.560 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:25.560 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:27:25.560 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:27:25.560 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:25.560 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:27:25.560 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:25.560 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:25.560 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.560 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:25.560 19:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.466 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:27.466 00:27:27.466 real 0m11.180s 00:27:27.466 user 0m10.537s 00:27:27.466 sys 0m5.640s 00:27:27.466 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:27.466 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:27.466 ************************************ 00:27:27.466 END TEST nvmf_abort 00:27:27.466 ************************************ 00:27:27.466 19:04:49 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:27.466 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:27.466 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:27.466 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:27.466 ************************************ 00:27:27.466 START TEST nvmf_ns_hotplug_stress 00:27:27.466 ************************************ 00:27:27.466 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:27.725 * Looking for test storage... 
00:27:27.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:27:27.726 19:04:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:27:27.726 19:04:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:27.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.726 --rc genhtml_branch_coverage=1 00:27:27.726 --rc genhtml_function_coverage=1 00:27:27.726 --rc genhtml_legend=1 00:27:27.726 --rc geninfo_all_blocks=1 00:27:27.726 --rc geninfo_unexecuted_blocks=1 00:27:27.726 00:27:27.726 ' 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:27.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.726 --rc genhtml_branch_coverage=1 00:27:27.726 --rc genhtml_function_coverage=1 00:27:27.726 --rc genhtml_legend=1 00:27:27.726 --rc geninfo_all_blocks=1 00:27:27.726 --rc geninfo_unexecuted_blocks=1 00:27:27.726 00:27:27.726 ' 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:27.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.726 --rc genhtml_branch_coverage=1 00:27:27.726 --rc genhtml_function_coverage=1 00:27:27.726 --rc genhtml_legend=1 00:27:27.726 --rc geninfo_all_blocks=1 00:27:27.726 --rc geninfo_unexecuted_blocks=1 00:27:27.726 00:27:27.726 ' 00:27:27.726 19:04:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:27.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.726 --rc genhtml_branch_coverage=1 00:27:27.726 --rc genhtml_function_coverage=1 00:27:27.726 --rc genhtml_legend=1 00:27:27.726 --rc geninfo_all_blocks=1 00:27:27.726 --rc geninfo_unexecuted_blocks=1 00:27:27.726 00:27:27.726 ' 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:27.726 19:04:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.726 
19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:27:27.726 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.727 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:27:27.727 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:27.727 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:27.727 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:27.727 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:27.727 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:27.727 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:27.727 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:27.727 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:27.727 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:27.727 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:27.727 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:27.727 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:27:27.727 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:27.727 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:27.727 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:27.727 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:27.727 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:27.727 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.727 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:27.727 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.727 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:27.727 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:27:27.727 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:27:27.727 19:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:27:34.297 
19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:34.297 19:04:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:34.297 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:34.297 19:04:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:34.297 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:34.297 
19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:34.297 Found net devices under 0000:86:00.0: cvl_0_0 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:34.297 Found net devices under 0000:86:00.1: cvl_0_1 00:27:34.297 
19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:34.297 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:34.298 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:34.298 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:34.298 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:34.298 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:34.298 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:34.298 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:34.298 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:34.298 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:34.298 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:34.298 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:34.298 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:34.298 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:34.298 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:34.298 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:34.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:34.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.417 ms 00:27:34.298 00:27:34.298 --- 10.0.0.2 ping statistics --- 00:27:34.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.298 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:27:34.298 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:34.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:34.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:27:34.298 00:27:34.298 --- 10.0.0.1 ping statistics --- 00:27:34.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.298 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:27:34.298 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:34.298 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:27:34.298 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:34.298 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:34.298 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:34.298 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:34.298 19:04:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:34.298 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:34.298 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:34.298 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:27:34.298 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:34.298 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:34.298 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:34.298 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3813155 00:27:34.298 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3813155 00:27:34.298 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:34.298 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3813155 ']' 00:27:34.298 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:34.298 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:34.298 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:34.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:34.298 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:34.298 19:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:34.298 [2024-11-20 19:04:55.915064] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:34.298 [2024-11-20 19:04:55.915933] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:27:34.298 [2024-11-20 19:04:55.915968] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:34.298 [2024-11-20 19:04:55.994856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:34.298 [2024-11-20 19:04:56.037010] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:34.298 [2024-11-20 19:04:56.037046] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:34.298 [2024-11-20 19:04:56.037053] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:34.298 [2024-11-20 19:04:56.037060] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:34.298 [2024-11-20 19:04:56.037066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
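The interface plumbing traced earlier in this run (nvmf/common.sh@263-291) reduces to the sketch below. The `ip` and `ping` stubs are mine so the snippet runs anywhere without root or the cvl_0_* NICs; interface names, addresses, and the namespace name are copied from the log.

```shell
#!/usr/bin/env bash
# Sketch of the target-side netns setup as traced above (nvmf/common.sh).
# 'ip' and 'ping' are stubbed so this is runnable anywhere; on a real test
# node they are the actual commands and require root.
ip()   { echo "ip $*"; }
ping() { echo "ping $*"; }

NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"   # target port moves into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
ping -c 1 10.0.0.2                                   # reachability check, both directions
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1
```

The namespace isolation is why the target is later launched as `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt`: the target listens on 10.0.0.2 inside the namespace while the initiator-side tools stay in the root namespace on 10.0.0.1.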
00:27:34.298 [2024-11-20 19:04:56.038519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:34.298 [2024-11-20 19:04:56.038630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:34.298 [2024-11-20 19:04:56.038630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:34.298 [2024-11-20 19:04:56.107002] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:34.298 [2024-11-20 19:04:56.107786] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:34.298 [2024-11-20 19:04:56.107969] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:34.298 [2024-11-20 19:04:56.108124] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:34.298 19:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:34.298 19:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:27:34.298 19:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:34.298 19:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:34.298 19:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:34.298 19:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:34.298 19:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:27:34.298 19:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:34.298 [2024-11-20 19:04:56.347413] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:34.298 19:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:34.298 19:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:34.557 [2024-11-20 19:04:56.747920] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:34.558 19:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:34.817 19:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:27:35.076 Malloc0 00:27:35.076 19:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:35.076 Delay0 00:27:35.076 19:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:35.335 19:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:27:35.594 NULL1 00:27:35.594 19:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:27:35.853 19:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3813633 00:27:35.853 19:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:27:35.853 19:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3813633 00:27:35.853 19:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:37.231 Read completed with error (sct=0, sc=11) 00:27:37.231 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:37.231 19:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:37.231 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:37.231 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
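The target bring-up traced in ns_hotplug_stress.sh@23-42 boils down to the RPC sequence below. `rpc` is a stub standing in for `scripts/rpc.py` so the sketch runs without a live SPDK target; the commands and arguments are copied from the log lines above.

```shell
#!/usr/bin/env bash
# Sketch of the ns_hotplug_stress target bring-up, with rpc.py stubbed.
rpc() { echo "rpc $*"; }   # stand-in for scripts/rpc.py against the running nvmf_tgt

NQN=nqn.2016-06.io.spdk:cnode1
rpc nvmf_create_transport -t tcp -o -u 8192                      # TCP transport init
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10  # allow up to 10 namespaces
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc bdev_malloc_create 32 512 -b Malloc0                         # 32 MiB, 512 B blocks
rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc bdev_null_create NULL1 1000 512                              # resizable null bdev
rpc nvmf_subsystem_add_ns "$NQN" Delay0
rpc nvmf_subsystem_add_ns "$NQN" NULL1
```

Delay0 wraps Malloc0 with 1 s average latencies on all four I/O classes, which keeps I/O in flight long enough for the hotplug events to race against it; NULL1 exists purely to be resized each iteration.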
00:27:37.231 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:37.231 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:37.231 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:37.231 19:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:27:37.231 19:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:27:37.489 true 00:27:37.489 19:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3813633 00:27:37.489 19:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:38.425 19:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:38.425 19:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:27:38.425 19:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:27:38.684 true 00:27:38.684 19:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3813633 00:27:38.684 19:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:27:38.943 19:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:38.943 19:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:27:38.943 19:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:27:39.201 true 00:27:39.201 19:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3813633 00:27:39.201 19:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:40.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:40.579 19:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:40.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:40.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:40.579 19:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:27:40.579 19:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:27:40.837 true 00:27:40.837 19:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3813633 00:27:40.837 19:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:40.837 19:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:41.096 19:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:27:41.096 19:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:27:41.354 true 00:27:41.354 19:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3813633 00:27:41.354 19:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:42.729 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:42.729 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:42.729 19:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:42.729 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:42.729 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:42.729 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:42.729 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:42.729 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:42.729 19:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:27:42.729 19:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:27:42.987 true 00:27:42.987 19:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3813633 00:27:42.987 19:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:43.924 19:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:43.924 19:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:27:43.924 19:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:27:44.183 true 00:27:44.183 19:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3813633 00:27:44.183 19:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:44.441 19:05:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:44.700 19:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:27:44.700 19:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:27:44.700 true 00:27:44.700 19:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3813633 00:27:44.700 19:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:45.635 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:45.894 19:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:45.894 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:45.894 19:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:27:45.894 19:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:27:46.152 true 00:27:46.152 19:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3813633 00:27:46.152 19:05:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:46.411 19:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:46.670 19:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:27:46.670 19:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:27:46.670 true 00:27:46.670 19:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3813633 00:27:46.670 19:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:48.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:48.047 19:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:48.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:48.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:48.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:48.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:48.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:48.047 
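Every iteration visible in this log follows the same pattern from ns_hotplug_stress.sh@44-50: while the perf job (PID 3813633 here, checked with `kill -0`) is still alive, namespace 1 is removed, Delay0 is re-added, and NULL1 is resized one unit larger. A minimal sketch with the RPC and liveness checks stubbed so it runs standalone:

```shell
#!/usr/bin/env bash
# Sketch of the hotplug stress loop; rpc.py and the 'kill -0 $PERF_PID'
# liveness check are stubbed (here the loop stops after 3 iterations).
rpc() { echo "rpc $*"; }
perf_alive() { [ "$1" -le 3 ]; }

NQN=nqn.2016-06.io.spdk:cnode1
null_size=1000
i=1
while perf_alive "$i"; do
  rpc nvmf_subsystem_remove_ns "$NQN" 1    # yank namespace 1 under live I/O
  rpc nvmf_subsystem_add_ns "$NQN" Delay0  # plug it back in
  null_size=$((null_size + 1))             # 1001, 1002, 1003, ...
  rpc bdev_null_resize NULL1 "$null_size"  # grow NULL1 concurrently
  i=$((i + 1))
done
echo "final null_size=$null_size"
```

This matches the log, where `null_size` climbs from 1000 through 1024 over the 30-second perf run, and explains the floods of `Read completed with error (sct=0, sc=11)`: sc=11 is the NVMe "namespace not available"-class status the initiator sees each time namespace 1 disappears mid-I/O.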
19:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:27:48.047 19:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:27:48.305 true 00:27:48.305 19:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3813633 00:27:48.305 19:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:49.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:49.239 19:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:49.239 19:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:27:49.239 19:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:27:49.497 true 00:27:49.497 19:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3813633 00:27:49.497 19:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:49.755 19:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:50.014 19:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:27:50.014 19:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:27:50.272 true 00:27:50.272 19:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3813633 00:27:50.272 19:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:51.206 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:51.206 19:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:51.462 19:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:27:51.462 19:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:27:51.462 true 00:27:51.462 19:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3813633 00:27:51.462 19:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:51.719 19:05:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:51.976 19:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:27:51.976 19:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:27:52.234 true 00:27:52.234 19:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3813633 00:27:52.234 19:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:53.606 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:53.606 19:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:53.606 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:53.606 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:53.606 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:53.606 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:53.606 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:53.606 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:53.606 19:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:27:53.606 19:05:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:27:53.864 true 00:27:53.864 19:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3813633 00:27:53.864 19:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:54.799 19:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:54.799 19:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:27:54.799 19:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:27:55.058 true 00:27:55.058 19:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3813633 00:27:55.058 19:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:55.058 19:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:55.316 19:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 
00:27:55.316 19:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:27:55.575 true 00:27:55.575 19:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3813633 00:27:55.575 19:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:56.510 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:56.510 19:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:56.769 19:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:27:56.769 19:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:27:57.029 true 00:27:57.030 19:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3813633 00:27:57.030 19:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:57.289 19:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:57.289 19:05:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:27:57.289 19:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:27:57.548 true 00:27:57.548 19:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3813633 00:27:57.548 19:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:58.922 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.922 19:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:58.923 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.923 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.923 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.923 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.923 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.923 19:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:27:58.923 19:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:27:59.181 true 00:27:59.181 19:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
3813633 00:27:59.181 19:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:00.116 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:00.116 19:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:00.116 19:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:28:00.116 19:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:28:00.375 true 00:28:00.375 19:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3813633 00:28:00.375 19:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:00.635 19:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:00.635 19:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:28:00.635 19:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:28:00.894 true 00:28:00.894 19:05:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3813633 00:28:00.894 19:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:01.828 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:01.828 19:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:02.086 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.086 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.086 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.086 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.086 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.086 19:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:28:02.086 19:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:28:02.345 true 00:28:02.345 19:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3813633 00:28:02.345 19:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:03.282 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:03.282 19:05:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:03.282 19:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:28:03.282 19:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:28:03.540 true 00:28:03.540 19:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3813633 00:28:03.540 19:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:03.799 19:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:04.057 19:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:28:04.057 19:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:28:04.057 true 00:28:04.316 19:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3813633 00:28:04.316 19:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:28:05.254 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.254 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.254 19:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:05.254 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.512 19:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:28:05.512 19:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:28:05.770 true 00:28:05.770 19:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3813633 00:28:05.770 19:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:06.705 19:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:06.705 Initializing NVMe Controllers 00:28:06.705 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:06.705 
Controller IO queue size 128, less than required. 00:28:06.705 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:06.705 Controller IO queue size 128, less than required. 00:28:06.705 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:06.705 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:06.705 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:06.705 Initialization complete. Launching workers. 00:28:06.705 ======================================================== 00:28:06.705 Latency(us) 00:28:06.705 Device Information : IOPS MiB/s Average min max 00:28:06.705 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1929.63 0.94 45370.37 2323.06 1021358.41 00:28:06.705 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17854.53 8.72 7168.82 1325.85 370457.64 00:28:06.705 ======================================================== 00:28:06.705 Total : 19784.16 9.66 10894.78 1325.85 1021358.41 00:28:06.705 00:28:06.705 19:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:28:06.705 19:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:28:06.963 true 00:28:06.963 19:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3813633 00:28:06.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3813633) - No such process 00:28:06.963 19:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3813633 00:28:06.963 19:05:29 
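The resize cycle driving the log above (the `ns_hotplug_stress.sh@44`–`@50` xtrace lines) checks that the perf workload is still alive with `kill -0`, hot-removes and re-adds a namespace, then grows the null bdev by one MiB per pass. A minimal standalone sketch of that loop follows; `rpc` is a stub standing in for `scripts/rpc.py`, and the PID, loop count, and sizes are illustrative, not taken from the run:

```shell
#!/bin/sh
# Sketch of the resize stress loop seen in the log:
# liveness check, remove_ns/add_ns cycle, then bdev_null_resize.
# rpc is a stub for scripts/rpc.py so the sketch runs standalone.
rpc() { echo "rpc $*"; }

perf_pid=$$      # stand-in for the workload PID (3813633 in the log)
null_size=1000   # starting size in MiB; the log counts up past 1028

for pass in 1 2 3; do
    kill -0 "$perf_pid" || break   # stop once the workload has exited
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))   # grow the null bdev each pass
    rpc bdev_null_resize NULL1 "$null_size"
done
echo "final size: $null_size"
```

When the workload exits, `kill -0` fails and the loop ends, which is exactly the `kill: (3813633) - No such process` transition visible in the log.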
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:07.221 19:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:07.480 19:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:28:07.480 19:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:28:07.480 19:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:28:07.480 19:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:07.480 19:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:28:07.480 null0 00:28:07.480 19:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:07.480 19:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:07.480 19:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:28:07.739 null1 00:28:07.739 19:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:07.739 19:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:07.739 19:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:28:08.022 null2 00:28:08.022 19:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:08.022 19:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:08.022 19:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:28:08.022 null3 00:28:08.022 19:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:08.022 19:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:08.022 19:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:28:08.280 null4 00:28:08.280 19:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:08.280 19:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:08.280 19:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:28:08.539 null5 00:28:08.539 19:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 
00:28:08.539 19:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:08.539 19:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:28:08.539 null6 00:28:08.539 19:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:08.539 19:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:08.539 19:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:28:08.799 null7 00:28:08.799 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:08.799 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:08.799 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
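The `null0` through `null7` creations above come from the `ns_hotplug_stress.sh@58`–`@60` loop: one 100 MiB null bdev with a 4096-byte block size per worker thread. A sketch of that loop, with `rpc` stubbed out for `scripts/rpc.py` so it runs standalone:

```shell
#!/bin/sh
# Sketch of the null-bdev creation loop: one bdev per worker thread.
# rpc is a stub for scripts/rpc.py.
rpc() { echo "rpc $*"; }

nthreads=8
i=0
while [ "$i" -lt "$nthreads" ]; do
    rpc bdev_null_create "null$i" 100 4096   # name, size_mb, block_size
    i=$((i + 1))
done
```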
00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3818968 3818969 3818971 3818973 3818974 3818976 3818978 3818980 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.800 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:09.059 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
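The heavily interleaved `@16`/`@17`/`@18` xtrace lines around here are eight background `add_remove` workers, each hot-adding and hot-removing its own namespace ID ten times, with the parent collecting PIDs and blocking in the `wait 3818968 3818969 …` call shown above. A standalone sketch of that pattern (the `rpc` stub replaces `scripts/rpc.py`; iteration counts mirror the script's `i < 10` guard):

```shell
#!/bin/sh
# Sketch of the per-thread namespace add/remove workers plus the
# parent's pid collection and wait. rpc is a stub for scripts/rpc.py.
rpc() { :; }   # a real run would invoke scripts/rpc.py here

add_remove() {
    nsid=$1 bdev=$2
    i=0
    while [ "$i" -lt 10 ]; do
        rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        i=$((i + 1))
    done
}

pids=""
n=1
while [ "$n" -le 8 ]; do
    add_remove "$n" "null$((n - 1))" &   # one background worker per namespace
    pids="$pids $!"
    n=$((n + 1))
done
wait $pids   # matches the 'wait <pid list>' line in the log
echo "workers done"
```

Because each worker races the others against the same subsystem, individual `add_ns`/`remove_ns` calls in the real run can fail transiently; the stress test tolerates that, which is why the log interleaving looks chaotic but the run still passes.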
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:09.059 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:09.059 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:09.059 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:09.059 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:09.059 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:09.059 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:09.059 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:09.318 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.318 19:05:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.318 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:09.318 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.318 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.318 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:09.318 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.318 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.318 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:09.318 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.318 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.318 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:09.318 19:05:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.318 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.318 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:09.318 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.318 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.318 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:09.318 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.318 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.318 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:09.318 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.318 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.318 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:09.577 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:09.577 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:09.577 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:09.577 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:09.577 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:09.577 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:09.577 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:09.577 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:09.577 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.577 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.577 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:09.577 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.577 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.577 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:09.577 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.577 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.577 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:09.836 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.837 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.837 19:05:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:09.837 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.837 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.837 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:09.837 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.837 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.837 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:09.837 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.837 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.837 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:09.837 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.837 19:05:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.837 19:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:09.837 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:09.837 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:09.837 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:09.837 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:09.837 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:09.837 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:09.837 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:09.837 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:10.103 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.103 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.103 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:10.103 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.103 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.103 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:10.103 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.103 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.103 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:10.103 19:05:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.103 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.103 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:10.103 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.103 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.103 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:10.103 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.103 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.103 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:10.103 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.103 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.103 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:10.103 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.103 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.103 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:10.362 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:10.363 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:10.363 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:10.363 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:10.363 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:10.363 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:10.363 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:10.363 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:10.363 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.363 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.363 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:10.621 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.621 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.621 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:10.621 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.621 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.621 19:05:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:10.621 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.621 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.621 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:10.621 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.621 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.621 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:10.621 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.621 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.621 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:10.621 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.621 19:05:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.621 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:10.621 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.621 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.621 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:10.621 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:10.621 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:10.621 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:10.621 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:10.621 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:10.621 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:10.621 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:10.621 19:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:10.880 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.880 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.880 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:10.880 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.880 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.880 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:10.880 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.880 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.880 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:10.880 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.880 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.880 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.880 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:10.880 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.881 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:10.881 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.881 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.881 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 
null0 00:28:10.881 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.881 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.881 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:10.881 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.881 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.881 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:11.141 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:11.141 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:11.141 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:11.141 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:28:11.141 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:11.141 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:11.141 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:11.141 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:11.403 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.403 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.403 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:11.403 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.403 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.403 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:11.403 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.403 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.403 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:11.403 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.403 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.403 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.403 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:11.403 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.403 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:11.403 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.403 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.403 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:11.403 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.403 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.403 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:11.403 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.403 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.403 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:11.403 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:11.695 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:11.696 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:11.696 19:05:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:11.696 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:11.696 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:11.696 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:11.696 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:11.696 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.696 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.696 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:11.696 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.696 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:28:11.696 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:11.696 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.696 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.696 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:11.696 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.696 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.696 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:11.696 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.696 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.696 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.696 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.696 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:11.696 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:11.696 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.696 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.696 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:11.696 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.696 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.696 19:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:11.980 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:11.980 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:11.980 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:11.980 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:11.980 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:11.980 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:11.980 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:11.980 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:12.251 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.251 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.251 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:12.251 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.251 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.251 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:12.251 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.251 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.251 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:12.251 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.251 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.251 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:12.251 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.251 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.251 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 
null6 00:28:12.251 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.251 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.251 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:12.251 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.251 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.251 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:12.251 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.251 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.251 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:12.251 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:12.251 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:12.520 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:12.520 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:12.520 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:12.520 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:12.520 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:12.520 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:12.520 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.520 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.520 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:12.520 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.520 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.520 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:12.520 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.520 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.520 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:12.520 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.520 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.520 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.520 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.520 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:12.520 19:05:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:12.520 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.520 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.520 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:12.520 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.521 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.521 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:12.521 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.521 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.521 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:12.779 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:12.779 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:12.779 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:12.779 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:12.779 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:12.779 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:12.779 19:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:12.779 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:13.037 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.037 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.037 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.037 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.037 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.037 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.037 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.037 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.037 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.037 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.037 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.037 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.037 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.037 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.037 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.037 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.037 19:05:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:28:13.037 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:28:13.037 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:13.037 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:28:13.037 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:13.037 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:28:13.037 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:13.037 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:13.037 rmmod nvme_tcp 00:28:13.037 rmmod nvme_fabrics 00:28:13.037 rmmod nvme_keyring 00:28:13.037 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:13.037 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:28:13.037 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:28:13.037 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3813155 ']' 00:28:13.037 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3813155 00:28:13.037 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3813155 ']' 00:28:13.037 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- common/autotest_common.sh@958 -- # kill -0 3813155 00:28:13.037 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:28:13.037 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:13.037 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3813155 00:28:13.038 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:13.038 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:13.038 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3813155' 00:28:13.038 killing process with pid 3813155 00:28:13.038 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3813155 00:28:13.038 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3813155 00:28:13.296 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:13.296 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:13.296 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:13.296 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:28:13.296 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:28:13.296 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:13.296 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:28:13.296 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:13.296 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:13.296 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.296 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:13.296 19:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:15.830 00:28:15.830 real 0m47.820s 00:28:15.830 user 2m58.674s 00:28:15.830 sys 0m20.051s 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:15.830 ************************************ 00:28:15.830 END TEST nvmf_ns_hotplug_stress 00:28:15.830 ************************************ 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:15.830 19:05:37 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:15.830 ************************************ 00:28:15.830 START TEST nvmf_delete_subsystem 00:28:15.830 ************************************ 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:15.830 * Looking for test storage... 00:28:15.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:28:15.830 19:05:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:15.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.830 --rc genhtml_branch_coverage=1 00:28:15.830 --rc genhtml_function_coverage=1 00:28:15.830 --rc genhtml_legend=1 00:28:15.830 --rc geninfo_all_blocks=1 00:28:15.830 --rc geninfo_unexecuted_blocks=1 00:28:15.830 00:28:15.830 ' 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:15.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.830 --rc genhtml_branch_coverage=1 00:28:15.830 --rc genhtml_function_coverage=1 00:28:15.830 --rc genhtml_legend=1 00:28:15.830 --rc geninfo_all_blocks=1 00:28:15.830 --rc geninfo_unexecuted_blocks=1 00:28:15.830 00:28:15.830 ' 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:15.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.830 --rc genhtml_branch_coverage=1 00:28:15.830 --rc genhtml_function_coverage=1 00:28:15.830 --rc genhtml_legend=1 00:28:15.830 --rc geninfo_all_blocks=1 00:28:15.830 --rc geninfo_unexecuted_blocks=1 00:28:15.830 00:28:15.830 ' 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:15.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.830 --rc genhtml_branch_coverage=1 00:28:15.830 --rc genhtml_function_coverage=1 00:28:15.830 --rc genhtml_legend=1 00:28:15.830 --rc geninfo_all_blocks=1 00:28:15.830 --rc geninfo_unexecuted_blocks=1 00:28:15.830 00:28:15.830 ' 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:15.830 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:15.831 19:05:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:28:15.831 19:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:22.395 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:22.395 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:28:22.395 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:22.395 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:22.395 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:22.395 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:22.395 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:22.395 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:28:22.395 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:22.395 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:22.396 19:05:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:22.396 19:05:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:22.396 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:22.396 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == 
unknown ]] 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.396 19:05:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:22.396 Found net devices under 0000:86:00.0: cvl_0_0 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:22.396 Found net devices under 0000:86:00.1: cvl_0_1 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:28:22.396 19:05:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:22.396 19:05:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:22.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:22.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:28:22.396 00:28:22.396 --- 10.0.0.2 ping statistics --- 00:28:22.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.396 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:28:22.396 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:22.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:22.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:28:22.397 00:28:22.397 --- 10.0.0.1 ping statistics --- 00:28:22.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.397 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:28:22.397 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:22.397 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:28:22.397 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:22.397 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:22.397 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:22.397 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:22.397 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:22.397 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:22.397 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:22.397 
19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:28:22.397 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:22.397 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:22.397 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:22.397 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3823343 00:28:22.397 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3823343 00:28:22.397 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:28:22.397 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3823343 ']' 00:28:22.397 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.397 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:22.397 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:22.397 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:22.397 19:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:22.397 [2024-11-20 19:05:43.842845] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:22.397 [2024-11-20 19:05:43.843730] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:28:22.397 [2024-11-20 19:05:43.843762] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:22.397 [2024-11-20 19:05:43.922777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:22.397 [2024-11-20 19:05:43.963252] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:22.397 [2024-11-20 19:05:43.963289] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:22.397 [2024-11-20 19:05:43.963296] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:22.397 [2024-11-20 19:05:43.963302] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:22.397 [2024-11-20 19:05:43.963308] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:22.397 [2024-11-20 19:05:43.964503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.397 [2024-11-20 19:05:43.964504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.397 [2024-11-20 19:05:44.031402] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:28:22.397 [2024-11-20 19:05:44.031734] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:22.397 [2024-11-20 19:05:44.032031] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:22.397 19:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:22.397 19:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:28:22.397 19:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:22.397 19:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:22.397 19:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:22.397 19:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:22.397 19:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:22.397 19:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.397 19:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:22.397 [2024-11-20 19:05:44.097298] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:22.397 19:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.397 19:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:22.397 19:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.397 19:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:22.397 19:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.397 19:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:22.397 19:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.397 19:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:22.397 [2024-11-20 19:05:44.125652] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:22.397 19:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.397 19:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:28:22.397 19:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.397 19:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:22.397 NULL1 00:28:22.397 19:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.397 19:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:22.397 19:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.397 19:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:22.397 Delay0 00:28:22.397 19:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.397 19:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:22.397 19:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.397 19:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:22.397 19:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.397 19:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3823371 00:28:22.397 19:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:28:22.397 19:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:22.397 [2024-11-20 19:05:44.240569] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:28:24.298 19:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:24.298 19:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:24.298 19:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[several hundred repeated spdk_nvme_perf entries elided here: "Read completed with error (sct=0, sc=8)", "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6", logged while the subsystem was deleted underneath the running workload; the nvme_tcp *ERROR* messages below were interspersed among them]
00:28:24.299 [2024-11-20 19:05:46.328617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2031680 is same with the state(6) to be set
00:28:25.234 [2024-11-20 19:05:47.293832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20329a0 is same with the state(6) to be set
00:28:25.234 [2024-11-20 19:05:47.330157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2bcc00d350 is same with the state(6) to be set
00:28:25.234 [2024-11-20 19:05:47.331129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20314a0 is same with the state(6) to be set
00:28:25.234 [2024-11-20 19:05:47.331291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2031860 is same with the state(6) to be set
00:28:25.234 [2024-11-20 19:05:47.331784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20312c0 is same with the state(6) to be set
00:28:25.234 Initializing NVMe Controllers
00:28:25.234 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:25.234 Controller IO queue size 128, less than required.
00:28:25.234 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:25.234 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:28:25.235 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:28:25.235 Initialization complete. Launching workers.
00:28:25.235 ========================================================
00:28:25.235                                                                           Latency(us)
00:28:25.235 Device Information                                                      :       IOPS      MiB/s    Average        min        max
00:28:25.235 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     195.53       0.10  943667.28     741.52 1011387.35
00:28:25.235 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     177.66       0.09  848190.86     431.20 1010657.94
00:28:25.235 ========================================================
00:28:25.235 Total                                                                   :     373.19       0.18  898214.41     431.20 1011387.35
00:28:25.235
00:28:25.235 [2024-11-20 19:05:47.332388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20329a0 (9): Bad file descriptor
00:28:25.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:28:25.235 19:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:25.235 19:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:28:25.235 19:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3823371
00:28:25.235 19:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:28:25.801 19:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:28:25.801 19:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3823371
00:28:25.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3823371) - No such process
00:28:25.801 19:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3823371
00:28:25.801 19:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:28:25.801 19:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3823371
00:28:25.801 19:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:28:25.801 19:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:25.801 19:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:28:25.801 19:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:25.801 19:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3823371
00:28:25.801 19:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:28:25.801 19:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:28:25.801 19:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:28:25.801 19:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:28:25.801 19:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:28:25.801 19:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:25.801 19:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
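The `NOT wait 3823371` sequence in the trace comes from SPDK's autotest_common.sh: the test expects `wait` on the already-dead perf pid to report a nonzero status, and `NOT` inverts that failure into a pass. A simplified sketch of the inversion idiom follows; the helper body is an assumption (the real `NOT` also goes through `valid_exec_arg` and inspects `es > 128` for signal deaths, as the trace shows), and the `kill -9` demo stands in for spdk_nvme_perf dying:

```shell
# Simplified "expect failure" helper, modeled on the NOT wrapper seen in
# the trace: it succeeds only when the wrapped command fails.
NOT() {
    if "$@" 2>/dev/null; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, which is what the caller asserted
}

sleep 5 &            # stand-in for the perf process
pid=$!
kill -9 "$pid"       # the process dies out from under us
NOT wait "$pid"      # wait reports the SIGKILL death (128+9), so NOT passes
```

This is the same shape as the trace's `es=1` path: `wait` fails, the failure is recorded, and the inverted check `(( !es == 0 ))` lets the test continue.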
00:28:25.801 19:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:25.801 19:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:25.801 19:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:25.801 19:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:25.801 [2024-11-20 19:05:47.861533] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:25.801 19:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:25.801 19:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:25.801 19:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:25.801 19:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:25.801 19:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:25.801 19:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3824049
00:28:25.801 19:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:28:25.801 19:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:28:25.801 19:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3824049
00:28:25.801 19:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:28:25.802 [2024-11-20 19:05:47.944396] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:28:26.059 19:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:26.059 19:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3824049
00:28:26.059 19:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:28:26.626 19:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:26.626 19:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3824049
00:28:26.626 19:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:28:27.193 19:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:27.193 19:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3824049
00:28:27.193 19:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:28:27.760 19:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:27.761 19:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3824049
00:28:27.761 19:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:28:28.327 19:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:28.327 19:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3824049
00:28:28.327 19:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:28:28.586 19:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:28.586 19:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3824049
00:28:28.586 19:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:28:28.846 Initializing NVMe Controllers
00:28:28.846 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:28.846 Controller IO queue size 128, less than required.
00:28:28.846 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:28.846 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:28:28.846 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:28:28.846 Initialization complete. Launching workers.
00:28:28.846 ========================================================
00:28:28.846                                                                           Latency(us)
00:28:28.846 Device Information                                                      :       IOPS      MiB/s    Average        min        max
00:28:28.846 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1003391.23 1000137.96 1041049.29
00:28:28.846 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1004328.96 1000169.24 1011458.17
00:28:28.846 ========================================================
00:28:28.846 Total                                                                   :     256.00       0.12 1003860.09 1000137.96 1041049.29
00:28:28.846
00:28:29.105 19:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:29.105 19:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3824049
00:28:29.105 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3824049) - No such process
00:28:29.105 19:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3824049
00:28:29.105 19:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:28:29.105 19:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:28:29.105 19:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:29.105 19:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:28:29.105 19:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:29.105 19:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:28:29.105 19:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
nvmf/common.sh@125 -- # for i in {1..20} 00:28:29.105 19:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:29.105 rmmod nvme_tcp 00:28:29.363 rmmod nvme_fabrics 00:28:29.363 rmmod nvme_keyring 00:28:29.363 19:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:29.363 19:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:28:29.363 19:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:28:29.364 19:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3823343 ']' 00:28:29.364 19:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3823343 00:28:29.364 19:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3823343 ']' 00:28:29.364 19:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3823343 00:28:29.364 19:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:28:29.364 19:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:29.364 19:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3823343 00:28:29.364 19:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:29.364 19:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:29.364 19:05:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3823343' 00:28:29.364 killing process with pid 3823343 00:28:29.364 19:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3823343 00:28:29.364 19:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3823343 00:28:29.623 19:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:29.623 19:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:29.623 19:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:29.623 19:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:28:29.623 19:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:28:29.623 19:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:29.623 19:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:28:29.623 19:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:29.623 19:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:29.623 19:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:29.623 19:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:29.623 19:05:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.527 19:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:31.527 00:28:31.527 real 0m16.121s 00:28:31.527 user 0m25.973s 00:28:31.527 sys 0m6.067s 00:28:31.527 19:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:31.527 19:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:31.527 ************************************ 00:28:31.527 END TEST nvmf_delete_subsystem 00:28:31.527 ************************************ 00:28:31.527 19:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:31.528 19:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:31.528 19:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:31.528 19:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:31.528 ************************************ 00:28:31.528 START TEST nvmf_host_management 00:28:31.528 ************************************ 00:28:31.528 19:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:31.787 * Looking for test storage... 
00:28:31.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:31.787 19:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:31.787 19:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:31.787 19:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:28:31.787 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:31.787 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:31.787 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:31.787 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:31.787 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:28:31.787 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:28:31.787 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:28:31.787 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:28:31.787 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:28:31.787 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:28:31.787 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:28:31.788 19:05:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:31.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:31.788 --rc genhtml_branch_coverage=1 00:28:31.788 --rc genhtml_function_coverage=1 00:28:31.788 --rc genhtml_legend=1 00:28:31.788 --rc geninfo_all_blocks=1 00:28:31.788 --rc geninfo_unexecuted_blocks=1 00:28:31.788 00:28:31.788 ' 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:31.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:31.788 --rc genhtml_branch_coverage=1 00:28:31.788 --rc genhtml_function_coverage=1 00:28:31.788 --rc genhtml_legend=1 00:28:31.788 --rc geninfo_all_blocks=1 00:28:31.788 --rc geninfo_unexecuted_blocks=1 00:28:31.788 00:28:31.788 ' 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:31.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:31.788 --rc genhtml_branch_coverage=1 00:28:31.788 --rc genhtml_function_coverage=1 00:28:31.788 --rc genhtml_legend=1 00:28:31.788 --rc geninfo_all_blocks=1 00:28:31.788 --rc geninfo_unexecuted_blocks=1 00:28:31.788 00:28:31.788 ' 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:31.788 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:31.788 --rc genhtml_branch_coverage=1 00:28:31.788 --rc genhtml_function_coverage=1 00:28:31.788 --rc genhtml_legend=1 00:28:31.788 --rc geninfo_all_blocks=1 00:28:31.788 --rc geninfo_unexecuted_blocks=1 00:28:31.788 00:28:31.788 ' 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:31.788 19:05:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.788 
19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:31.788 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:31.789 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:31.789 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:31.789 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:31.789 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.789 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:31.789 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.789 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:31.789 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:31.789 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:28:31.789 19:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:38.359 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:38.359 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:28:38.359 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:38.359 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:38.359 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:38.359 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:38.359 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:38.359 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:28:38.359 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:38.359 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:28:38.359 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:28:38.359 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:28:38.359 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:28:38.359 
19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:28:38.359 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:28:38.359 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:38.359 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:38.359 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:38.359 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:38.359 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:38.359 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:38.359 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:38.359 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:38.359 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:38.359 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:38.359 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:38.359 19:05:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:38.360 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:38.360 19:05:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:38.360 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:38.360 19:05:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:38.360 Found net devices under 0000:86:00.0: cvl_0_0 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:38.360 Found net devices under 0000:86:00.1: cvl_0_1 00:28:38.360 19:05:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:38.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:38.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:28:38.360 00:28:38.360 --- 10.0.0.2 ping statistics --- 00:28:38.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:38.360 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:38.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:38.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:28:38.360 00:28:38.360 --- 10.0.0.1 ping statistics --- 00:28:38.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:38.360 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3828045 00:28:38.360 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3828045 00:28:38.361 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:28:38.361 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3828045 ']' 00:28:38.361 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:38.361 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:28:38.361 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:38.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:38.361 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:38.361 19:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:38.361 [2024-11-20 19:06:00.032390] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:38.361 [2024-11-20 19:06:00.033322] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:28:38.361 [2024-11-20 19:06:00.033359] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:38.361 [2024-11-20 19:06:00.118386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:38.361 [2024-11-20 19:06:00.161838] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:38.361 [2024-11-20 19:06:00.161876] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:38.361 [2024-11-20 19:06:00.161883] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:38.361 [2024-11-20 19:06:00.161889] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:38.361 [2024-11-20 19:06:00.161894] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:38.361 [2024-11-20 19:06:00.163333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:38.361 [2024-11-20 19:06:00.163440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:38.361 [2024-11-20 19:06:00.163548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:38.361 [2024-11-20 19:06:00.163549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:38.361 [2024-11-20 19:06:00.232943] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:38.361 [2024-11-20 19:06:00.233869] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:38.361 [2024-11-20 19:06:00.234001] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:28:38.361 [2024-11-20 19:06:00.234393] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:38.361 [2024-11-20 19:06:00.234434] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:28:38.619 19:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:38.619 19:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:38.619 19:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:38.619 19:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:38.619 19:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:38.619 19:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:38.619 19:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:38.619 19:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.619 19:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:38.620 [2024-11-20 19:06:00.916197] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:38.878 19:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.878 19:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:28:38.878 19:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:38.878 19:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:38.878 19:06:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:38.878 19:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:28:38.878 19:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:28:38.878 19:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.878 19:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:38.878 Malloc0 00:28:38.878 [2024-11-20 19:06:01.004467] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:38.878 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.878 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:28:38.878 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:38.878 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:38.878 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3828386 00:28:38.878 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3828386 /var/tmp/bdevperf.sock 00:28:38.878 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3828386 ']' 00:28:38.878 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:28:38.878 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:38.878 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:28:38.878 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:38.878 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:38.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:38.878 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:38.878 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:38.878 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:38.878 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:38.878 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:38.878 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:38.878 { 00:28:38.878 "params": { 00:28:38.878 "name": "Nvme$subsystem", 00:28:38.878 "trtype": "$TEST_TRANSPORT", 00:28:38.878 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.878 "adrfam": "ipv4", 00:28:38.878 "trsvcid": "$NVMF_PORT", 00:28:38.878 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.878 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.878 "hdgst": ${hdgst:-false}, 00:28:38.878 "ddgst": ${ddgst:-false} 00:28:38.878 }, 00:28:38.878 "method": "bdev_nvme_attach_controller" 00:28:38.878 } 00:28:38.878 EOF 00:28:38.878 )") 00:28:38.878 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:38.878 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:28:38.878 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:38.878 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:38.878 "params": { 00:28:38.878 "name": "Nvme0", 00:28:38.878 "trtype": "tcp", 00:28:38.878 "traddr": "10.0.0.2", 00:28:38.878 "adrfam": "ipv4", 00:28:38.878 "trsvcid": "4420", 00:28:38.878 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:38.878 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:38.878 "hdgst": false, 00:28:38.878 "ddgst": false 00:28:38.878 }, 00:28:38.878 "method": "bdev_nvme_attach_controller" 00:28:38.878 }' 00:28:38.878 [2024-11-20 19:06:01.098416] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:28:38.878 [2024-11-20 19:06:01.098465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3828386 ] 00:28:38.878 [2024-11-20 19:06:01.175624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.136 [2024-11-20 19:06:01.217375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.396 Running I/O for 10 seconds... 
00:28:39.396 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:39.396 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:39.396 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:39.396 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.396 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:39.396 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.396 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:39.396 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:28:39.396 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:39.396 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:28:39.396 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:28:39.396 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:28:39.396 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:28:39.396 19:06:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:39.396 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:39.396 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:39.396 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.396 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:39.396 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.396 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=102 00:28:39.396 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 102 -ge 100 ']' 00:28:39.396 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:28:39.396 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:28:39.396 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:28:39.396 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:39.396 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.396 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:39.396 
[2024-11-20 19:06:01.595930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9efd70 is same with the state(6) to be set 00:28:39.396 [2024-11-20 19:06:01.595971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9efd70 is same with the state(6) to be set 00:28:39.396 [2024-11-20 19:06:01.595979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9efd70 is same with the state(6) to be set 00:28:39.396 [2024-11-20 19:06:01.595986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9efd70 is same with the state(6) to be set 00:28:39.396 [2024-11-20 19:06:01.595997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9efd70 is same with the state(6) to be set 00:28:39.396 [2024-11-20 19:06:01.596003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9efd70 is same with the state(6) to be set 00:28:39.396 [2024-11-20 19:06:01.600662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:39.396 [2024-11-20 19:06:01.600695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.396 [2024-11-20 19:06:01.600707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:39.396 [2024-11-20 19:06:01.600716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.396 [2024-11-20 19:06:01.600724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:39.396 [2024-11-20 19:06:01.600731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:28:39.396 [2024-11-20 19:06:01.600739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:39.396 [2024-11-20 19:06:01.600745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.396 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.396 [2024-11-20 19:06:01.600752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd99500 is same with the state(6) to be set 00:28:39.396 [2024-11-20 19:06:01.600818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.396 [2024-11-20 19:06:01.600828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.396 [2024-11-20 19:06:01.600840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.396 [2024-11-20 19:06:01.600847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.396 [2024-11-20 19:06:01.600856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.396 [2024-11-20 19:06:01.600864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.396 [2024-11-20 19:06:01.600873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.396 [2024-11-20 19:06:01.600880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.396 [2024-11-20 19:06:01.600889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.396 [2024-11-20 19:06:01.600895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE command / ABORTED - SQ DELETION completion notice pairs repeat for cid:5 through cid:63 (lba 25216 through 32640, len:128 each) ...]
00:28:39.397 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:28:39.397 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:39.398 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:28:39.398 [2024-11-20 19:06:01.602796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:39.398 task 
offset: 24576 on job bdev=Nvme0n1 fails 00:28:39.398 00:28:39.398 Latency(us) 00:28:39.398 [2024-11-20T18:06:01.723Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:39.398 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:39.398 Job: Nvme0n1 ended in about 0.11 seconds with error 00:28:39.398 Verification LBA range: start 0x0 length 0x400 00:28:39.398 Nvme0n1 : 0.11 1780.98 111.31 593.66 0.00 24828.82 1771.03 26464.06 00:28:39.398 [2024-11-20T18:06:01.723Z] =================================================================================================================== 00:28:39.398 [2024-11-20T18:06:01.723Z] Total : 1780.98 111.31 593.66 0.00 24828.82 1771.03 26464.06 00:28:39.398 [2024-11-20 19:06:01.605137] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:39.398 [2024-11-20 19:06:01.605158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd99500 (9): Bad file descriptor 00:28:39.398 [2024-11-20 19:06:01.606065] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:28:39.398 [2024-11-20 19:06:01.606143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:39.398 [2024-11-20 19:06:01.606165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.398 [2024-11-20 19:06:01.606180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:28:39.398 [2024-11-20 19:06:01.606188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:28:39.398 [2024-11-20 19:06:01.606194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to 
poll NVMe-oF Fabric CONNECT command 00:28:39.398 [2024-11-20 19:06:01.606207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd99500 00:28:39.398 [2024-11-20 19:06:01.606228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd99500 (9): Bad file descriptor 00:28:39.398 [2024-11-20 19:06:01.606242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:39.398 [2024-11-20 19:06:01.606249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:39.398 [2024-11-20 19:06:01.606257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:39.398 [2024-11-20 19:06:01.606266] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:39.398 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.398 19:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:28:40.334 19:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3828386 00:28:40.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3828386) - No such process 00:28:40.334 19:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:28:40.334 19:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:28:40.334 19:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:40.334 19:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:28:40.334 19:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:40.334 19:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:40.334 19:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:40.334 19:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:40.334 { 00:28:40.334 "params": { 00:28:40.334 "name": "Nvme$subsystem", 00:28:40.334 "trtype": "$TEST_TRANSPORT", 00:28:40.334 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.334 "adrfam": "ipv4", 00:28:40.334 "trsvcid": "$NVMF_PORT", 00:28:40.334 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.334 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.334 "hdgst": ${hdgst:-false}, 00:28:40.334 "ddgst": ${ddgst:-false} 00:28:40.334 }, 00:28:40.334 "method": "bdev_nvme_attach_controller" 00:28:40.334 } 00:28:40.334 EOF 00:28:40.334 )") 00:28:40.334 19:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:40.334 19:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:28:40.334 19:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:40.334 19:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:40.334 "params": { 00:28:40.334 "name": "Nvme0", 00:28:40.334 "trtype": "tcp", 00:28:40.334 "traddr": "10.0.0.2", 00:28:40.334 "adrfam": "ipv4", 00:28:40.334 "trsvcid": "4420", 00:28:40.334 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:40.334 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:40.334 "hdgst": false, 00:28:40.334 "ddgst": false 00:28:40.334 }, 00:28:40.334 "method": "bdev_nvme_attach_controller" 00:28:40.334 }' 00:28:40.592 [2024-11-20 19:06:02.667575] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:28:40.592 [2024-11-20 19:06:02.667625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3828687 ] 00:28:40.592 [2024-11-20 19:06:02.743093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.592 [2024-11-20 19:06:02.782103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.851 Running I/O for 1 seconds... 
00:28:42.045 2048.00 IOPS, 128.00 MiB/s 00:28:42.045 Latency(us) 00:28:42.045 [2024-11-20T18:06:04.370Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:42.045 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:42.045 Verification LBA range: start 0x0 length 0x400 00:28:42.045 Nvme0n1 : 1.01 2092.16 130.76 0.00 0.00 30093.59 6459.98 26838.55 00:28:42.045 [2024-11-20T18:06:04.370Z] =================================================================================================================== 00:28:42.045 [2024-11-20T18:06:04.370Z] Total : 2092.16 130.76 0.00 0.00 30093.59 6459.98 26838.55 00:28:42.045 19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:28:42.045 19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:28:42.045 19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:42.045 19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:42.045 19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:28:42.045 19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:42.045 19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:28:42.045 19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:42.045 19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:28:42.045 
19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:42.045 19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:42.045 rmmod nvme_tcp 00:28:42.045 rmmod nvme_fabrics 00:28:42.045 rmmod nvme_keyring 00:28:42.045 19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:42.045 19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:28:42.045 19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:28:42.045 19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3828045 ']' 00:28:42.045 19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3828045 00:28:42.045 19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3828045 ']' 00:28:42.045 19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3828045 00:28:42.045 19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:28:42.045 19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:42.045 19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3828045 00:28:42.305 19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:42.305 19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:42.305 19:06:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3828045' 00:28:42.305 killing process with pid 3828045 00:28:42.305 19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3828045 00:28:42.305 19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3828045 00:28:42.306 [2024-11-20 19:06:04.559457] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:28:42.306 19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:42.306 19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:42.306 19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:42.306 19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:28:42.306 19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:28:42.306 19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:42.306 19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:28:42.306 19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:42.306 19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:42.306 19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:42.306 19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:42.306 19:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:28:44.843 00:28:44.843 real 0m12.810s 00:28:44.843 user 0m17.352s 00:28:44.843 sys 0m6.187s 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:44.843 ************************************ 00:28:44.843 END TEST nvmf_host_management 00:28:44.843 ************************************ 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:44.843 ************************************ 00:28:44.843 START TEST nvmf_lvol 00:28:44.843 ************************************ 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:44.843 * Looking for test storage... 
00:28:44.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:44.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.843 --rc genhtml_branch_coverage=1 00:28:44.843 --rc genhtml_function_coverage=1 00:28:44.843 --rc genhtml_legend=1 00:28:44.843 --rc geninfo_all_blocks=1 00:28:44.843 --rc geninfo_unexecuted_blocks=1 00:28:44.843 00:28:44.843 ' 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:44.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.843 --rc genhtml_branch_coverage=1 00:28:44.843 --rc genhtml_function_coverage=1 00:28:44.843 --rc genhtml_legend=1 00:28:44.843 --rc geninfo_all_blocks=1 00:28:44.843 --rc geninfo_unexecuted_blocks=1 00:28:44.843 00:28:44.843 ' 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:44.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.843 --rc genhtml_branch_coverage=1 00:28:44.843 --rc genhtml_function_coverage=1 00:28:44.843 --rc genhtml_legend=1 00:28:44.843 --rc geninfo_all_blocks=1 00:28:44.843 --rc geninfo_unexecuted_blocks=1 00:28:44.843 00:28:44.843 ' 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:44.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.843 --rc genhtml_branch_coverage=1 00:28:44.843 --rc genhtml_function_coverage=1 00:28:44.843 --rc genhtml_legend=1 00:28:44.843 --rc geninfo_all_blocks=1 00:28:44.843 --rc geninfo_unexecuted_blocks=1 00:28:44.843 00:28:44.843 ' 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:44.843 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.844 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.844 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.844 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:28:44.844 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.844 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:28:44.844 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:44.844 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:44.844 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:44.844 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:44.844 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:44.844 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:44.844 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:44.844 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:44.844 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:44.844 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:44.844 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:44.844 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:44.844 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:28:44.844 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:28:44.844 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:44.844 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:28:44.844 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:44.844 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:44.844 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:44.844 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:44.844 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:44.844 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.844 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:44.844 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.844 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:44.844 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:44.844 
19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:28:44.844 19:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:28:51.414 19:06:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:51.414 19:06:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:51.414 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:51.414 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:51.414 19:06:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:51.414 Found net devices under 0000:86:00.0: cvl_0_0 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.414 19:06:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:51.414 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:51.415 Found net devices under 0000:86:00.1: cvl_0_1 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:51.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:51.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.478 ms 00:28:51.415 00:28:51.415 --- 10.0.0.2 ping statistics --- 00:28:51.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.415 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:51.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:51.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:28:51.415 00:28:51.415 --- 10.0.0.1 ping statistics --- 00:28:51.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.415 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3832822 
00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3832822 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3832822 ']' 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:51.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:51.415 19:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:51.415 [2024-11-20 19:06:12.889097] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:51.415 [2024-11-20 19:06:12.890027] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:28:51.415 [2024-11-20 19:06:12.890063] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:51.415 [2024-11-20 19:06:12.970529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:51.415 [2024-11-20 19:06:13.012180] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:51.415 [2024-11-20 19:06:13.012222] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:51.415 [2024-11-20 19:06:13.012229] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:51.415 [2024-11-20 19:06:13.012235] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:51.415 [2024-11-20 19:06:13.012240] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:51.415 [2024-11-20 19:06:13.013581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:51.415 [2024-11-20 19:06:13.013692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.415 [2024-11-20 19:06:13.013693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:51.415 [2024-11-20 19:06:13.082446] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:51.415 [2024-11-20 19:06:13.083229] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:51.415 [2024-11-20 19:06:13.083707] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:51.415 [2024-11-20 19:06:13.083765] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:51.415 19:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:51.415 19:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:28:51.415 19:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:51.415 19:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:51.415 19:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:51.675 19:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:51.675 19:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:51.675 [2024-11-20 19:06:13.942458] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:51.675 19:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:51.934 19:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:28:51.934 19:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:52.193 19:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:28:52.193 19:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:28:52.452 19:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:28:52.710 19:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=1eb9f44e-93a6-41b9-8da5-58818d13d4c0 00:28:52.710 19:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1eb9f44e-93a6-41b9-8da5-58818d13d4c0 lvol 20 00:28:52.710 19:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=9fc35ead-2ac3-4f15-9cf9-1f6f88fb2aef 00:28:52.710 19:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:52.967 19:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9fc35ead-2ac3-4f15-9cf9-1f6f88fb2aef 00:28:53.225 19:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:53.225 [2024-11-20 19:06:15.518364] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:53.225 19:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:53.483 
19:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:28:53.483 19:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3833315 00:28:53.483 19:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:28:54.855 19:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 9fc35ead-2ac3-4f15-9cf9-1f6f88fb2aef MY_SNAPSHOT 00:28:54.855 19:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5f11b63b-ccff-4433-9621-79a916ac9dbc 00:28:54.855 19:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 9fc35ead-2ac3-4f15-9cf9-1f6f88fb2aef 30 00:28:55.113 19:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 5f11b63b-ccff-4433-9621-79a916ac9dbc MY_CLONE 00:28:55.371 19:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=266becf9-f979-4669-a5dd-a21752882b66 00:28:55.371 19:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 266becf9-f979-4669-a5dd-a21752882b66 00:28:55.629 19:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3833315 00:29:05.598 Initializing NVMe Controllers 00:29:05.598 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:05.598 
Controller IO queue size 128, less than required. 00:29:05.598 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:05.598 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:29:05.598 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:29:05.598 Initialization complete. Launching workers. 00:29:05.598 ======================================================== 00:29:05.598 Latency(us) 00:29:05.598 Device Information : IOPS MiB/s Average min max 00:29:05.598 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12242.57 47.82 10456.23 364.26 45017.68 00:29:05.598 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12071.87 47.16 10602.87 2106.06 49391.69 00:29:05.598 ======================================================== 00:29:05.598 Total : 24314.43 94.98 10529.04 364.26 49391.69 00:29:05.598 00:29:05.598 19:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:05.598 19:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9fc35ead-2ac3-4f15-9cf9-1f6f88fb2aef 00:29:05.598 19:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1eb9f44e-93a6-41b9-8da5-58818d13d4c0 00:29:05.598 19:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:29:05.598 19:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:29:05.598 19:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # 
nvmftestfini 00:29:05.598 19:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:05.598 19:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:29:05.598 19:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:05.598 19:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:29:05.598 19:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:05.598 19:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:05.598 rmmod nvme_tcp 00:29:05.598 rmmod nvme_fabrics 00:29:05.598 rmmod nvme_keyring 00:29:05.598 19:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:05.598 19:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:29:05.598 19:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:29:05.598 19:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3832822 ']' 00:29:05.598 19:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3832822 00:29:05.598 19:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3832822 ']' 00:29:05.598 19:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3832822 00:29:05.598 19:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:29:05.598 19:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:05.598 19:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 3832822 00:29:05.598 19:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:05.598 19:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:05.598 19:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3832822' 00:29:05.598 killing process with pid 3832822 00:29:05.598 19:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3832822 00:29:05.598 19:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3832822 00:29:05.598 19:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:05.598 19:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:05.598 19:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:05.598 19:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:29:05.598 19:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:29:05.598 19:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:05.598 19:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:29:05.598 19:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:05.598 19:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:05.598 19:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.598 19:06:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:05.598 19:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:06.978 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:06.978 00:29:06.978 real 0m22.348s 00:29:06.978 user 0m55.545s 00:29:06.978 sys 0m9.767s 00:29:06.978 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:06.978 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:06.978 ************************************ 00:29:06.978 END TEST nvmf_lvol 00:29:06.978 ************************************ 00:29:06.978 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:06.978 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:06.978 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:06.978 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:06.978 ************************************ 00:29:06.978 START TEST nvmf_lvs_grow 00:29:06.978 ************************************ 00:29:06.978 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:06.978 * Looking for test storage... 
00:29:06.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:06.978 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:06.978 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:29:06.978 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:07.239 19:06:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:07.239 19:06:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:07.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.239 --rc genhtml_branch_coverage=1 00:29:07.239 --rc genhtml_function_coverage=1 00:29:07.239 --rc genhtml_legend=1 00:29:07.239 --rc geninfo_all_blocks=1 00:29:07.239 --rc geninfo_unexecuted_blocks=1 00:29:07.239 00:29:07.239 ' 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:07.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.239 --rc genhtml_branch_coverage=1 00:29:07.239 --rc genhtml_function_coverage=1 00:29:07.239 --rc genhtml_legend=1 00:29:07.239 --rc geninfo_all_blocks=1 00:29:07.239 --rc geninfo_unexecuted_blocks=1 00:29:07.239 00:29:07.239 ' 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:07.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.239 --rc genhtml_branch_coverage=1 00:29:07.239 --rc genhtml_function_coverage=1 00:29:07.239 --rc genhtml_legend=1 00:29:07.239 --rc geninfo_all_blocks=1 00:29:07.239 --rc geninfo_unexecuted_blocks=1 00:29:07.239 00:29:07.239 ' 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:07.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.239 --rc genhtml_branch_coverage=1 00:29:07.239 --rc genhtml_function_coverage=1 00:29:07.239 --rc genhtml_legend=1 00:29:07.239 --rc geninfo_all_blocks=1 00:29:07.239 --rc 
geninfo_unexecuted_blocks=1 00:29:07.239 00:29:07.239 ' 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:07.239 19:06:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:07.239 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:29:07.240 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:07.240 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:07.240 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:07.240 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.240 19:06:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.240 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.240 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:29:07.240 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.240 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:29:07.240 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:07.240 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:07.240 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:07.240 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:07.240 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:07.240 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:07.240 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:07.240 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:07.240 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:07.240 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:07.240 19:06:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:07.240 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:07.240 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:29:07.240 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:07.240 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:07.240 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:07.240 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:07.240 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:07.240 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.240 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:07.240 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.240 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:07.240 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:07.240 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:29:07.240 19:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:13.955 
19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:13.955 19:06:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:13.955 19:06:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:13.955 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:13.955 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:13.955 Found net devices under 0000:86:00.0: cvl_0_0 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.955 19:06:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:13.955 Found net devices under 0000:86:00.1: cvl_0_1 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:13.955 
19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:13.955 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:13.956 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:13.956 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:13.956 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:13.956 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:13.956 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:13.956 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:13.956 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:13.956 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:13.956 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:13.956 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:13.956 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:13.956 19:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:13.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:13.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.431 ms 00:29:13.956 00:29:13.956 --- 10.0.0.2 ping statistics --- 00:29:13.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.956 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:13.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:13.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:29:13.956 00:29:13.956 --- 10.0.0.1 ping statistics --- 00:29:13.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.956 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:13.956 19:06:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3838560 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3838560 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3838560 ']' 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:13.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:13.956 [2024-11-20 19:06:35.351989] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:13.956 [2024-11-20 19:06:35.352960] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:29:13.956 [2024-11-20 19:06:35.353001] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:13.956 [2024-11-20 19:06:35.433834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.956 [2024-11-20 19:06:35.476016] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:13.956 [2024-11-20 19:06:35.476045] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:13.956 [2024-11-20 19:06:35.476053] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:13.956 [2024-11-20 19:06:35.476059] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:13.956 [2024-11-20 19:06:35.476064] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:13.956 [2024-11-20 19:06:35.476492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.956 [2024-11-20 19:06:35.544807] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:13.956 [2024-11-20 19:06:35.545016] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:13.956 [2024-11-20 19:06:35.781123] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:13.956 ************************************ 00:29:13.956 START TEST lvs_grow_clean 00:29:13.956 ************************************ 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:29:13.956 19:06:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:13.956 19:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:13.956 19:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:13.956 19:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:13.956 19:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=77b0892c-f7ba-45c6-a0dc-24549c0b6476 00:29:13.956 19:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77b0892c-f7ba-45c6-a0dc-24549c0b6476 00:29:13.956 19:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:14.216 19:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:14.216 19:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:14.216 19:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 77b0892c-f7ba-45c6-a0dc-24549c0b6476 lvol 150 00:29:14.474 19:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c38bbff4-48b6-401d-8532-411fa8e3d011 00:29:14.474 19:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:14.474 19:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:14.733 [2024-11-20 19:06:36.840869] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:14.733 [2024-11-20 19:06:36.840993] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:14.733 true 00:29:14.733 19:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77b0892c-f7ba-45c6-a0dc-24549c0b6476 00:29:14.733 19:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:14.992 19:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:14.992 19:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:14.992 19:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c38bbff4-48b6-401d-8532-411fa8e3d011 00:29:15.251 19:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:15.511 [2024-11-20 19:06:37.629431] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:15.511 19:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:15.770 19:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3838959 00:29:15.770 19:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:15.770 19:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3838959 /var/tmp/bdevperf.sock 00:29:15.770 19:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3838959 ']' 00:29:15.770 19:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:15.770 19:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:15.770 19:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:15.770 19:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:15.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:15.770 19:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:15.770 19:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:15.770 [2024-11-20 19:06:37.904074] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:29:15.770 [2024-11-20 19:06:37.904120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3838959 ] 00:29:15.770 [2024-11-20 19:06:37.977892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:15.770 [2024-11-20 19:06:38.022432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:16.708 19:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:16.708 19:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:29:16.708 19:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:16.966 Nvme0n1 00:29:16.966 19:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:16.966 [ 00:29:16.966 { 00:29:16.966 "name": "Nvme0n1", 00:29:16.966 "aliases": [ 00:29:16.967 "c38bbff4-48b6-401d-8532-411fa8e3d011" 00:29:16.967 ], 00:29:16.967 "product_name": "NVMe disk", 00:29:16.967 
"block_size": 4096, 00:29:16.967 "num_blocks": 38912, 00:29:16.967 "uuid": "c38bbff4-48b6-401d-8532-411fa8e3d011", 00:29:16.967 "numa_id": 1, 00:29:16.967 "assigned_rate_limits": { 00:29:16.967 "rw_ios_per_sec": 0, 00:29:16.967 "rw_mbytes_per_sec": 0, 00:29:16.967 "r_mbytes_per_sec": 0, 00:29:16.967 "w_mbytes_per_sec": 0 00:29:16.967 }, 00:29:16.967 "claimed": false, 00:29:16.967 "zoned": false, 00:29:16.967 "supported_io_types": { 00:29:16.967 "read": true, 00:29:16.967 "write": true, 00:29:16.967 "unmap": true, 00:29:16.967 "flush": true, 00:29:16.967 "reset": true, 00:29:16.967 "nvme_admin": true, 00:29:16.967 "nvme_io": true, 00:29:16.967 "nvme_io_md": false, 00:29:16.967 "write_zeroes": true, 00:29:16.967 "zcopy": false, 00:29:16.967 "get_zone_info": false, 00:29:16.967 "zone_management": false, 00:29:16.967 "zone_append": false, 00:29:16.967 "compare": true, 00:29:16.967 "compare_and_write": true, 00:29:16.967 "abort": true, 00:29:16.967 "seek_hole": false, 00:29:16.967 "seek_data": false, 00:29:16.967 "copy": true, 00:29:16.967 "nvme_iov_md": false 00:29:16.967 }, 00:29:16.967 "memory_domains": [ 00:29:16.967 { 00:29:16.967 "dma_device_id": "system", 00:29:16.967 "dma_device_type": 1 00:29:16.967 } 00:29:16.967 ], 00:29:16.967 "driver_specific": { 00:29:16.967 "nvme": [ 00:29:16.967 { 00:29:16.967 "trid": { 00:29:16.967 "trtype": "TCP", 00:29:16.967 "adrfam": "IPv4", 00:29:16.967 "traddr": "10.0.0.2", 00:29:16.967 "trsvcid": "4420", 00:29:16.967 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:16.967 }, 00:29:16.967 "ctrlr_data": { 00:29:16.967 "cntlid": 1, 00:29:16.967 "vendor_id": "0x8086", 00:29:16.967 "model_number": "SPDK bdev Controller", 00:29:16.967 "serial_number": "SPDK0", 00:29:16.967 "firmware_revision": "25.01", 00:29:16.967 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:16.967 "oacs": { 00:29:16.967 "security": 0, 00:29:16.967 "format": 0, 00:29:16.967 "firmware": 0, 00:29:16.967 "ns_manage": 0 00:29:16.967 }, 00:29:16.967 "multi_ctrlr": true, 
00:29:16.967 "ana_reporting": false 00:29:16.967 }, 00:29:16.967 "vs": { 00:29:16.967 "nvme_version": "1.3" 00:29:16.967 }, 00:29:16.967 "ns_data": { 00:29:16.967 "id": 1, 00:29:16.967 "can_share": true 00:29:16.967 } 00:29:16.967 } 00:29:16.967 ], 00:29:16.967 "mp_policy": "active_passive" 00:29:16.967 } 00:29:16.967 } 00:29:16.967 ] 00:29:17.224 19:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3839189 00:29:17.224 19:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:17.224 19:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:17.224 Running I/O for 10 seconds... 00:29:18.158 Latency(us) 00:29:18.158 [2024-11-20T18:06:40.483Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:18.158 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:18.158 Nvme0n1 : 1.00 21787.00 85.11 0.00 0.00 0.00 0.00 0.00 00:29:18.158 [2024-11-20T18:06:40.483Z] =================================================================================================================== 00:29:18.158 [2024-11-20T18:06:40.483Z] Total : 21787.00 85.11 0.00 0.00 0.00 0.00 0.00 00:29:18.158 00:29:19.092 19:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 77b0892c-f7ba-45c6-a0dc-24549c0b6476 00:29:19.092 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:19.092 Nvme0n1 : 2.00 22149.50 86.52 0.00 0.00 0.00 0.00 0.00 00:29:19.092 [2024-11-20T18:06:41.417Z] 
=================================================================================================================== 00:29:19.092 [2024-11-20T18:06:41.417Z] Total : 22149.50 86.52 0.00 0.00 0.00 0.00 0.00 00:29:19.092 00:29:19.350 true 00:29:19.350 19:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77b0892c-f7ba-45c6-a0dc-24549c0b6476 00:29:19.350 19:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:19.350 19:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:19.350 19:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:19.350 19:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3839189 00:29:20.282 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:20.282 Nvme0n1 : 3.00 22281.00 87.04 0.00 0.00 0.00 0.00 0.00 00:29:20.282 [2024-11-20T18:06:42.607Z] =================================================================================================================== 00:29:20.282 [2024-11-20T18:06:42.607Z] Total : 22281.00 87.04 0.00 0.00 0.00 0.00 0.00 00:29:20.282 00:29:21.215 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:21.215 Nvme0n1 : 4.00 22394.75 87.48 0.00 0.00 0.00 0.00 0.00 00:29:21.215 [2024-11-20T18:06:43.540Z] =================================================================================================================== 00:29:21.215 [2024-11-20T18:06:43.540Z] Total : 22394.75 87.48 0.00 0.00 0.00 0.00 0.00 00:29:21.215 00:29:22.148 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:29:22.148 Nvme0n1 : 5.00 22472.60 87.78 0.00 0.00 0.00 0.00 0.00 00:29:22.148 [2024-11-20T18:06:44.473Z] =================================================================================================================== 00:29:22.148 [2024-11-20T18:06:44.473Z] Total : 22472.60 87.78 0.00 0.00 0.00 0.00 0.00 00:29:22.148 00:29:23.081 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:23.081 Nvme0n1 : 6.00 22532.50 88.02 0.00 0.00 0.00 0.00 0.00 00:29:23.082 [2024-11-20T18:06:45.407Z] =================================================================================================================== 00:29:23.082 [2024-11-20T18:06:45.407Z] Total : 22532.50 88.02 0.00 0.00 0.00 0.00 0.00 00:29:23.082 00:29:24.455 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:24.455 Nvme0n1 : 7.00 22579.86 88.20 0.00 0.00 0.00 0.00 0.00 00:29:24.455 [2024-11-20T18:06:46.780Z] =================================================================================================================== 00:29:24.455 [2024-11-20T18:06:46.780Z] Total : 22579.86 88.20 0.00 0.00 0.00 0.00 0.00 00:29:24.455 00:29:25.387 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:25.387 Nvme0n1 : 8.00 22611.38 88.33 0.00 0.00 0.00 0.00 0.00 00:29:25.387 [2024-11-20T18:06:47.712Z] =================================================================================================================== 00:29:25.387 [2024-11-20T18:06:47.712Z] Total : 22611.38 88.33 0.00 0.00 0.00 0.00 0.00 00:29:25.387 00:29:26.321 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:26.321 Nvme0n1 : 9.00 22639.44 88.44 0.00 0.00 0.00 0.00 0.00 00:29:26.321 [2024-11-20T18:06:48.646Z] =================================================================================================================== 00:29:26.321 [2024-11-20T18:06:48.646Z] Total : 22639.44 88.44 0.00 0.00 0.00 0.00 0.00 00:29:26.321 
00:29:27.256 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:27.256 Nvme0n1 : 10.00 22666.70 88.54 0.00 0.00 0.00 0.00 0.00 00:29:27.256 [2024-11-20T18:06:49.581Z] =================================================================================================================== 00:29:27.256 [2024-11-20T18:06:49.581Z] Total : 22666.70 88.54 0.00 0.00 0.00 0.00 0.00 00:29:27.256 00:29:27.256 00:29:27.256 Latency(us) 00:29:27.256 [2024-11-20T18:06:49.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.256 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:27.256 Nvme0n1 : 10.01 22666.98 88.54 0.00 0.00 5643.16 2855.50 20597.03 00:29:27.256 [2024-11-20T18:06:49.581Z] =================================================================================================================== 00:29:27.256 [2024-11-20T18:06:49.581Z] Total : 22666.98 88.54 0.00 0.00 5643.16 2855.50 20597.03 00:29:27.256 { 00:29:27.256 "results": [ 00:29:27.256 { 00:29:27.256 "job": "Nvme0n1", 00:29:27.256 "core_mask": "0x2", 00:29:27.256 "workload": "randwrite", 00:29:27.256 "status": "finished", 00:29:27.256 "queue_depth": 128, 00:29:27.256 "io_size": 4096, 00:29:27.256 "runtime": 10.005525, 00:29:27.256 "iops": 22666.976495486244, 00:29:27.256 "mibps": 88.54287693549314, 00:29:27.256 "io_failed": 0, 00:29:27.256 "io_timeout": 0, 00:29:27.256 "avg_latency_us": 5643.157189078872, 00:29:27.256 "min_latency_us": 2855.497142857143, 00:29:27.256 "max_latency_us": 20597.02857142857 00:29:27.256 } 00:29:27.256 ], 00:29:27.256 "core_count": 1 00:29:27.256 } 00:29:27.256 19:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3838959 00:29:27.256 19:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3838959 ']' 00:29:27.256 19:06:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3838959 00:29:27.256 19:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:29:27.256 19:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:27.256 19:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3838959 00:29:27.256 19:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:27.256 19:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:27.256 19:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3838959' 00:29:27.256 killing process with pid 3838959 00:29:27.256 19:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3838959 00:29:27.256 Received shutdown signal, test time was about 10.000000 seconds 00:29:27.256 00:29:27.256 Latency(us) 00:29:27.256 [2024-11-20T18:06:49.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.256 [2024-11-20T18:06:49.581Z] =================================================================================================================== 00:29:27.256 [2024-11-20T18:06:49.581Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:27.256 19:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3838959 00:29:27.515 19:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:27.773 19:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:27.773 19:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77b0892c-f7ba-45c6-a0dc-24549c0b6476 00:29:27.774 19:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:28.031 19:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:28.031 19:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:29:28.032 19:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:28.290 [2024-11-20 19:06:50.416947] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:28.290 19:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77b0892c-f7ba-45c6-a0dc-24549c0b6476 00:29:28.290 19:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:29:28.290 19:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77b0892c-f7ba-45c6-a0dc-24549c0b6476 00:29:28.290 19:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:28.290 19:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:28.290 19:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:28.290 19:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:28.290 19:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:28.290 19:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:28.290 19:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:28.290 19:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:28.290 19:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77b0892c-f7ba-45c6-a0dc-24549c0b6476 00:29:28.548 request: 00:29:28.548 { 00:29:28.548 "uuid": "77b0892c-f7ba-45c6-a0dc-24549c0b6476", 00:29:28.548 "method": 
"bdev_lvol_get_lvstores", 00:29:28.549 "req_id": 1 00:29:28.549 } 00:29:28.549 Got JSON-RPC error response 00:29:28.549 response: 00:29:28.549 { 00:29:28.549 "code": -19, 00:29:28.549 "message": "No such device" 00:29:28.549 } 00:29:28.549 19:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:29:28.549 19:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:28.549 19:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:28.549 19:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:28.549 19:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:28.549 aio_bdev 00:29:28.549 19:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c38bbff4-48b6-401d-8532-411fa8e3d011 00:29:28.549 19:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=c38bbff4-48b6-401d-8532-411fa8e3d011 00:29:28.549 19:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:28.549 19:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:29:28.549 19:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:28.549 19:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:28.549 19:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:28.808 19:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c38bbff4-48b6-401d-8532-411fa8e3d011 -t 2000 00:29:29.067 [ 00:29:29.067 { 00:29:29.067 "name": "c38bbff4-48b6-401d-8532-411fa8e3d011", 00:29:29.067 "aliases": [ 00:29:29.067 "lvs/lvol" 00:29:29.067 ], 00:29:29.067 "product_name": "Logical Volume", 00:29:29.067 "block_size": 4096, 00:29:29.067 "num_blocks": 38912, 00:29:29.067 "uuid": "c38bbff4-48b6-401d-8532-411fa8e3d011", 00:29:29.067 "assigned_rate_limits": { 00:29:29.067 "rw_ios_per_sec": 0, 00:29:29.067 "rw_mbytes_per_sec": 0, 00:29:29.067 "r_mbytes_per_sec": 0, 00:29:29.067 "w_mbytes_per_sec": 0 00:29:29.067 }, 00:29:29.067 "claimed": false, 00:29:29.067 "zoned": false, 00:29:29.067 "supported_io_types": { 00:29:29.067 "read": true, 00:29:29.067 "write": true, 00:29:29.067 "unmap": true, 00:29:29.067 "flush": false, 00:29:29.067 "reset": true, 00:29:29.067 "nvme_admin": false, 00:29:29.067 "nvme_io": false, 00:29:29.067 "nvme_io_md": false, 00:29:29.067 "write_zeroes": true, 00:29:29.067 "zcopy": false, 00:29:29.067 "get_zone_info": false, 00:29:29.067 "zone_management": false, 00:29:29.067 "zone_append": false, 00:29:29.067 "compare": false, 00:29:29.067 "compare_and_write": false, 00:29:29.067 "abort": false, 00:29:29.067 "seek_hole": true, 00:29:29.067 "seek_data": true, 00:29:29.067 "copy": false, 00:29:29.067 "nvme_iov_md": false 00:29:29.067 }, 00:29:29.067 "driver_specific": { 00:29:29.067 "lvol": { 00:29:29.067 "lvol_store_uuid": "77b0892c-f7ba-45c6-a0dc-24549c0b6476", 00:29:29.067 "base_bdev": "aio_bdev", 00:29:29.067 
"thin_provision": false, 00:29:29.067 "num_allocated_clusters": 38, 00:29:29.067 "snapshot": false, 00:29:29.067 "clone": false, 00:29:29.067 "esnap_clone": false 00:29:29.067 } 00:29:29.067 } 00:29:29.067 } 00:29:29.067 ] 00:29:29.067 19:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:29:29.067 19:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77b0892c-f7ba-45c6-a0dc-24549c0b6476 00:29:29.067 19:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:29.326 19:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:29.326 19:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77b0892c-f7ba-45c6-a0dc-24549c0b6476 00:29:29.326 19:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:29.584 19:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:29.584 19:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c38bbff4-48b6-401d-8532-411fa8e3d011 00:29:29.584 19:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 77b0892c-f7ba-45c6-a0dc-24549c0b6476 
00:29:29.843 19:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:30.102 19:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:30.102 00:29:30.102 real 0m16.411s 00:29:30.102 user 0m16.032s 00:29:30.102 sys 0m1.535s 00:29:30.102 19:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:30.102 19:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:30.102 ************************************ 00:29:30.102 END TEST lvs_grow_clean 00:29:30.102 ************************************ 00:29:30.102 19:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:29:30.102 19:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:30.102 19:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:30.102 19:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:30.102 ************************************ 00:29:30.102 START TEST lvs_grow_dirty 00:29:30.102 ************************************ 00:29:30.102 19:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:29:30.102 19:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:30.102 19:06:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:30.102 19:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:30.102 19:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:30.102 19:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:30.102 19:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:30.102 19:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:30.102 19:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:30.102 19:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:30.361 19:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:30.361 19:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:30.620 19:06:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=5785a72e-6276-4a3e-a230-3cf2c398b49f 00:29:30.620 19:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5785a72e-6276-4a3e-a230-3cf2c398b49f 00:29:30.620 19:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:30.620 19:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:30.620 19:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:30.620 19:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5785a72e-6276-4a3e-a230-3cf2c398b49f lvol 150 00:29:30.878 19:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=7424e55e-dbd6-4c2c-9f36-1853baec1714 00:29:30.878 19:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:30.878 19:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:31.137 [2024-11-20 19:06:53.308880] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:31.137 [2024-11-20 
19:06:53.309011] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:31.137 true 00:29:31.137 19:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5785a72e-6276-4a3e-a230-3cf2c398b49f 00:29:31.137 19:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:31.396 19:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:31.396 19:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:31.396 19:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7424e55e-dbd6-4c2c-9f36-1853baec1714 00:29:31.655 19:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:31.915 [2024-11-20 19:06:54.041344] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:31.915 19:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:32.175 19:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3841758 00:29:32.175 19:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:32.175 19:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:32.175 19:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3841758 /var/tmp/bdevperf.sock 00:29:32.175 19:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3841758 ']' 00:29:32.175 19:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:32.175 19:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:32.175 19:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:32.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:32.175 19:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:32.175 19:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:32.175 [2024-11-20 19:06:54.300841] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:29:32.175 [2024-11-20 19:06:54.300888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3841758 ] 00:29:32.175 [2024-11-20 19:06:54.373618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.175 [2024-11-20 19:06:54.415431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:32.434 19:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:32.434 19:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:32.434 19:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:32.693 Nvme0n1 00:29:32.693 19:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:32.952 [ 00:29:32.952 { 00:29:32.952 "name": "Nvme0n1", 00:29:32.952 "aliases": [ 00:29:32.952 "7424e55e-dbd6-4c2c-9f36-1853baec1714" 00:29:32.952 ], 00:29:32.952 "product_name": "NVMe disk", 00:29:32.952 "block_size": 4096, 00:29:32.952 "num_blocks": 38912, 00:29:32.952 "uuid": "7424e55e-dbd6-4c2c-9f36-1853baec1714", 00:29:32.952 "numa_id": 1, 00:29:32.952 "assigned_rate_limits": { 00:29:32.952 "rw_ios_per_sec": 0, 00:29:32.952 "rw_mbytes_per_sec": 0, 00:29:32.952 "r_mbytes_per_sec": 0, 00:29:32.952 "w_mbytes_per_sec": 0 00:29:32.952 }, 00:29:32.952 "claimed": false, 00:29:32.952 "zoned": false, 
00:29:32.952 "supported_io_types": { 00:29:32.952 "read": true, 00:29:32.952 "write": true, 00:29:32.952 "unmap": true, 00:29:32.952 "flush": true, 00:29:32.952 "reset": true, 00:29:32.952 "nvme_admin": true, 00:29:32.952 "nvme_io": true, 00:29:32.952 "nvme_io_md": false, 00:29:32.952 "write_zeroes": true, 00:29:32.952 "zcopy": false, 00:29:32.952 "get_zone_info": false, 00:29:32.952 "zone_management": false, 00:29:32.952 "zone_append": false, 00:29:32.952 "compare": true, 00:29:32.952 "compare_and_write": true, 00:29:32.952 "abort": true, 00:29:32.952 "seek_hole": false, 00:29:32.952 "seek_data": false, 00:29:32.952 "copy": true, 00:29:32.952 "nvme_iov_md": false 00:29:32.952 }, 00:29:32.952 "memory_domains": [ 00:29:32.952 { 00:29:32.952 "dma_device_id": "system", 00:29:32.952 "dma_device_type": 1 00:29:32.952 } 00:29:32.952 ], 00:29:32.952 "driver_specific": { 00:29:32.952 "nvme": [ 00:29:32.952 { 00:29:32.952 "trid": { 00:29:32.952 "trtype": "TCP", 00:29:32.952 "adrfam": "IPv4", 00:29:32.952 "traddr": "10.0.0.2", 00:29:32.952 "trsvcid": "4420", 00:29:32.952 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:32.952 }, 00:29:32.952 "ctrlr_data": { 00:29:32.952 "cntlid": 1, 00:29:32.952 "vendor_id": "0x8086", 00:29:32.952 "model_number": "SPDK bdev Controller", 00:29:32.952 "serial_number": "SPDK0", 00:29:32.952 "firmware_revision": "25.01", 00:29:32.952 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:32.952 "oacs": { 00:29:32.952 "security": 0, 00:29:32.952 "format": 0, 00:29:32.952 "firmware": 0, 00:29:32.952 "ns_manage": 0 00:29:32.952 }, 00:29:32.952 "multi_ctrlr": true, 00:29:32.952 "ana_reporting": false 00:29:32.952 }, 00:29:32.952 "vs": { 00:29:32.952 "nvme_version": "1.3" 00:29:32.952 }, 00:29:32.952 "ns_data": { 00:29:32.952 "id": 1, 00:29:32.952 "can_share": true 00:29:32.952 } 00:29:32.952 } 00:29:32.952 ], 00:29:32.952 "mp_policy": "active_passive" 00:29:32.952 } 00:29:32.952 } 00:29:32.952 ] 00:29:32.952 19:06:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3841775 00:29:32.953 19:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:32.953 19:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:32.953 Running I/O for 10 seconds... 00:29:33.887 Latency(us) 00:29:33.887 [2024-11-20T18:06:56.212Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:33.887 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:33.887 Nvme0n1 : 1.00 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:29:33.887 [2024-11-20T18:06:56.212Z] =================================================================================================================== 00:29:33.887 [2024-11-20T18:06:56.212Z] Total : 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:29:33.887 00:29:34.823 19:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5785a72e-6276-4a3e-a230-3cf2c398b49f 00:29:35.081 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:35.081 Nvme0n1 : 2.00 23050.50 90.04 0.00 0.00 0.00 0.00 0.00 00:29:35.081 [2024-11-20T18:06:57.406Z] =================================================================================================================== 00:29:35.081 [2024-11-20T18:06:57.406Z] Total : 23050.50 90.04 0.00 0.00 0.00 0.00 0.00 00:29:35.081 00:29:35.081 true 00:29:35.081 19:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 5785a72e-6276-4a3e-a230-3cf2c398b49f 00:29:35.081 19:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:35.339 19:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:35.339 19:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:35.339 19:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3841775 00:29:35.906 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:35.906 Nvme0n1 : 3.00 23177.67 90.54 0.00 0.00 0.00 0.00 0.00 00:29:35.906 [2024-11-20T18:06:58.231Z] =================================================================================================================== 00:29:35.906 [2024-11-20T18:06:58.231Z] Total : 23177.67 90.54 0.00 0.00 0.00 0.00 0.00 00:29:35.906 00:29:36.840 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:36.840 Nvme0n1 : 4.00 23281.25 90.94 0.00 0.00 0.00 0.00 0.00 00:29:36.840 [2024-11-20T18:06:59.165Z] =================================================================================================================== 00:29:36.840 [2024-11-20T18:06:59.165Z] Total : 23281.25 90.94 0.00 0.00 0.00 0.00 0.00 00:29:36.840 00:29:38.216 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:38.216 Nvme0n1 : 5.00 23324.00 91.11 0.00 0.00 0.00 0.00 0.00 00:29:38.216 [2024-11-20T18:07:00.541Z] =================================================================================================================== 00:29:38.216 [2024-11-20T18:07:00.541Z] Total : 23324.00 91.11 0.00 0.00 0.00 0.00 0.00 00:29:38.216 00:29:39.150 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:29:39.150 Nvme0n1 : 6.00 23373.67 91.30 0.00 0.00 0.00 0.00 0.00 00:29:39.150 [2024-11-20T18:07:01.475Z] =================================================================================================================== 00:29:39.150 [2024-11-20T18:07:01.475Z] Total : 23373.67 91.30 0.00 0.00 0.00 0.00 0.00 00:29:39.150 00:29:40.084 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:40.084 Nvme0n1 : 7.00 23427.29 91.51 0.00 0.00 0.00 0.00 0.00 00:29:40.084 [2024-11-20T18:07:02.409Z] =================================================================================================================== 00:29:40.084 [2024-11-20T18:07:02.409Z] Total : 23427.29 91.51 0.00 0.00 0.00 0.00 0.00 00:29:40.084 00:29:41.018 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:41.018 Nvme0n1 : 8.00 23451.62 91.61 0.00 0.00 0.00 0.00 0.00 00:29:41.018 [2024-11-20T18:07:03.343Z] =================================================================================================================== 00:29:41.018 [2024-11-20T18:07:03.343Z] Total : 23451.62 91.61 0.00 0.00 0.00 0.00 0.00 00:29:41.018 00:29:41.952 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:41.952 Nvme0n1 : 9.00 23484.67 91.74 0.00 0.00 0.00 0.00 0.00 00:29:41.952 [2024-11-20T18:07:04.277Z] =================================================================================================================== 00:29:41.952 [2024-11-20T18:07:04.277Z] Total : 23484.67 91.74 0.00 0.00 0.00 0.00 0.00 00:29:41.952 00:29:42.886 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:42.886 Nvme0n1 : 10.00 23504.80 91.82 0.00 0.00 0.00 0.00 0.00 00:29:42.886 [2024-11-20T18:07:05.211Z] =================================================================================================================== 00:29:42.886 [2024-11-20T18:07:05.211Z] Total : 23504.80 91.82 0.00 0.00 0.00 0.00 0.00 00:29:42.886 00:29:42.886 
00:29:42.886 Latency(us) 00:29:42.886 [2024-11-20T18:07:05.211Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:42.886 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:42.886 Nvme0n1 : 10.00 23502.96 91.81 0.00 0.00 5442.86 3339.22 27587.54 00:29:42.886 [2024-11-20T18:07:05.211Z] =================================================================================================================== 00:29:42.886 [2024-11-20T18:07:05.211Z] Total : 23502.96 91.81 0.00 0.00 5442.86 3339.22 27587.54 00:29:42.886 { 00:29:42.886 "results": [ 00:29:42.886 { 00:29:42.886 "job": "Nvme0n1", 00:29:42.886 "core_mask": "0x2", 00:29:42.886 "workload": "randwrite", 00:29:42.886 "status": "finished", 00:29:42.886 "queue_depth": 128, 00:29:42.886 "io_size": 4096, 00:29:42.886 "runtime": 10.003508, 00:29:42.886 "iops": 23502.955163328705, 00:29:42.886 "mibps": 91.80841860675275, 00:29:42.886 "io_failed": 0, 00:29:42.886 "io_timeout": 0, 00:29:42.886 "avg_latency_us": 5442.863673061996, 00:29:42.886 "min_latency_us": 3339.215238095238, 00:29:42.886 "max_latency_us": 27587.53523809524 00:29:42.886 } 00:29:42.886 ], 00:29:42.886 "core_count": 1 00:29:42.886 } 00:29:42.887 19:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3841758 00:29:42.887 19:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3841758 ']' 00:29:42.887 19:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3841758 00:29:42.887 19:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:29:42.887 19:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:42.887 19:07:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3841758 00:29:43.146 19:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:43.146 19:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:43.146 19:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3841758' 00:29:43.146 killing process with pid 3841758 00:29:43.146 19:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3841758 00:29:43.146 Received shutdown signal, test time was about 10.000000 seconds 00:29:43.146 00:29:43.146 Latency(us) 00:29:43.146 [2024-11-20T18:07:05.471Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:43.146 [2024-11-20T18:07:05.471Z] =================================================================================================================== 00:29:43.146 [2024-11-20T18:07:05.471Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:43.146 19:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3841758 00:29:43.146 19:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:43.406 19:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:43.664 19:07:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5785a72e-6276-4a3e-a230-3cf2c398b49f 00:29:43.664 19:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:43.923 19:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:43.923 19:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:29:43.923 19:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3838560 00:29:43.923 19:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3838560 00:29:43.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3838560 Killed "${NVMF_APP[@]}" "$@" 00:29:43.923 19:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:29:43.923 19:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:29:43.923 19:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:43.923 19:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:43.923 19:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:43.923 19:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3843602 00:29:43.923 19:07:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3843602 00:29:43.923 19:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:43.923 19:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3843602 ']' 00:29:43.923 19:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:43.923 19:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:43.923 19:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:43.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:43.923 19:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:43.923 19:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:43.923 [2024-11-20 19:07:06.092220] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:43.923 [2024-11-20 19:07:06.093104] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:29:43.923 [2024-11-20 19:07:06.093142] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:43.923 [2024-11-20 19:07:06.172275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.923 [2024-11-20 19:07:06.212288] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:43.923 [2024-11-20 19:07:06.212322] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:43.923 [2024-11-20 19:07:06.212329] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:43.924 [2024-11-20 19:07:06.212335] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:43.924 [2024-11-20 19:07:06.212340] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:43.924 [2024-11-20 19:07:06.212900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:44.183 [2024-11-20 19:07:06.281619] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:44.183 [2024-11-20 19:07:06.281823] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:44.183 19:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:44.183 19:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:44.183 19:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:44.183 19:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:44.183 19:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:44.183 19:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:44.183 19:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:44.443 [2024-11-20 19:07:06.514344] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:44.443 [2024-11-20 19:07:06.514536] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:44.443 [2024-11-20 19:07:06.514620] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:44.443 19:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:29:44.443 19:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 7424e55e-dbd6-4c2c-9f36-1853baec1714 00:29:44.443 19:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=7424e55e-dbd6-4c2c-9f36-1853baec1714 00:29:44.443 19:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:44.443 19:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:29:44.443 19:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:44.443 19:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:44.443 19:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:44.443 19:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7424e55e-dbd6-4c2c-9f36-1853baec1714 -t 2000 00:29:44.702 [ 00:29:44.702 { 00:29:44.702 "name": "7424e55e-dbd6-4c2c-9f36-1853baec1714", 00:29:44.702 "aliases": [ 00:29:44.702 "lvs/lvol" 00:29:44.702 ], 00:29:44.702 "product_name": "Logical Volume", 00:29:44.702 "block_size": 4096, 00:29:44.702 "num_blocks": 38912, 00:29:44.702 "uuid": "7424e55e-dbd6-4c2c-9f36-1853baec1714", 00:29:44.702 "assigned_rate_limits": { 00:29:44.702 "rw_ios_per_sec": 0, 00:29:44.702 "rw_mbytes_per_sec": 0, 00:29:44.702 "r_mbytes_per_sec": 0, 00:29:44.702 "w_mbytes_per_sec": 0 00:29:44.702 }, 00:29:44.702 "claimed": false, 00:29:44.702 "zoned": false, 00:29:44.702 "supported_io_types": { 00:29:44.702 "read": true, 00:29:44.702 "write": true, 00:29:44.702 "unmap": true, 00:29:44.702 "flush": false, 00:29:44.702 "reset": true, 00:29:44.702 "nvme_admin": false, 00:29:44.702 "nvme_io": false, 00:29:44.702 "nvme_io_md": false, 00:29:44.702 "write_zeroes": true, 
00:29:44.702 "zcopy": false, 00:29:44.702 "get_zone_info": false, 00:29:44.702 "zone_management": false, 00:29:44.702 "zone_append": false, 00:29:44.702 "compare": false, 00:29:44.702 "compare_and_write": false, 00:29:44.702 "abort": false, 00:29:44.702 "seek_hole": true, 00:29:44.702 "seek_data": true, 00:29:44.702 "copy": false, 00:29:44.702 "nvme_iov_md": false 00:29:44.702 }, 00:29:44.702 "driver_specific": { 00:29:44.702 "lvol": { 00:29:44.702 "lvol_store_uuid": "5785a72e-6276-4a3e-a230-3cf2c398b49f", 00:29:44.702 "base_bdev": "aio_bdev", 00:29:44.702 "thin_provision": false, 00:29:44.702 "num_allocated_clusters": 38, 00:29:44.702 "snapshot": false, 00:29:44.702 "clone": false, 00:29:44.702 "esnap_clone": false 00:29:44.702 } 00:29:44.702 } 00:29:44.702 } 00:29:44.702 ] 00:29:44.702 19:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:29:44.702 19:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5785a72e-6276-4a3e-a230-3cf2c398b49f 00:29:44.702 19:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:29:44.963 19:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:29:44.963 19:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5785a72e-6276-4a3e-a230-3cf2c398b49f 00:29:44.963 19:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:29:45.222 19:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:29:45.222 19:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:45.222 [2024-11-20 19:07:07.497373] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:45.222 19:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5785a72e-6276-4a3e-a230-3cf2c398b49f 00:29:45.222 19:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:29:45.222 19:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5785a72e-6276-4a3e-a230-3cf2c398b49f 00:29:45.222 19:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:45.222 19:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:45.222 19:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:45.223 19:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:45.223 19:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:45.223 19:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:45.223 19:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:45.223 19:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:45.223 19:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5785a72e-6276-4a3e-a230-3cf2c398b49f 00:29:45.482 request: 00:29:45.482 { 00:29:45.482 "uuid": "5785a72e-6276-4a3e-a230-3cf2c398b49f", 00:29:45.482 "method": "bdev_lvol_get_lvstores", 00:29:45.482 "req_id": 1 00:29:45.482 } 00:29:45.482 Got JSON-RPC error response 00:29:45.482 response: 00:29:45.482 { 00:29:45.482 "code": -19, 00:29:45.482 "message": "No such device" 00:29:45.482 } 00:29:45.482 19:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:29:45.482 19:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:45.482 19:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:45.482 19:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:45.482 19:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:45.741 aio_bdev 00:29:45.741 19:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7424e55e-dbd6-4c2c-9f36-1853baec1714 00:29:45.741 19:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=7424e55e-dbd6-4c2c-9f36-1853baec1714 00:29:45.741 19:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:45.741 19:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:29:45.741 19:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:45.741 19:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:45.741 19:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:45.999 19:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7424e55e-dbd6-4c2c-9f36-1853baec1714 -t 2000 00:29:45.999 [ 00:29:45.999 { 00:29:45.999 "name": "7424e55e-dbd6-4c2c-9f36-1853baec1714", 00:29:45.999 "aliases": [ 00:29:45.999 "lvs/lvol" 00:29:45.999 ], 00:29:45.999 "product_name": "Logical Volume", 00:29:45.999 "block_size": 4096, 00:29:45.999 "num_blocks": 38912, 00:29:45.999 "uuid": "7424e55e-dbd6-4c2c-9f36-1853baec1714", 00:29:45.999 "assigned_rate_limits": { 00:29:45.999 "rw_ios_per_sec": 0, 00:29:45.999 "rw_mbytes_per_sec": 0, 00:29:45.999 
"r_mbytes_per_sec": 0, 00:29:45.999 "w_mbytes_per_sec": 0 00:29:45.999 }, 00:29:45.999 "claimed": false, 00:29:45.999 "zoned": false, 00:29:45.999 "supported_io_types": { 00:29:45.999 "read": true, 00:29:45.999 "write": true, 00:29:45.999 "unmap": true, 00:29:45.999 "flush": false, 00:29:45.999 "reset": true, 00:29:45.999 "nvme_admin": false, 00:29:45.999 "nvme_io": false, 00:29:45.999 "nvme_io_md": false, 00:29:45.999 "write_zeroes": true, 00:29:45.999 "zcopy": false, 00:29:45.999 "get_zone_info": false, 00:29:45.999 "zone_management": false, 00:29:45.999 "zone_append": false, 00:29:45.999 "compare": false, 00:29:45.999 "compare_and_write": false, 00:29:45.999 "abort": false, 00:29:45.999 "seek_hole": true, 00:29:45.999 "seek_data": true, 00:29:45.999 "copy": false, 00:29:45.999 "nvme_iov_md": false 00:29:45.999 }, 00:29:45.999 "driver_specific": { 00:29:45.999 "lvol": { 00:29:45.999 "lvol_store_uuid": "5785a72e-6276-4a3e-a230-3cf2c398b49f", 00:29:45.999 "base_bdev": "aio_bdev", 00:29:45.999 "thin_provision": false, 00:29:45.999 "num_allocated_clusters": 38, 00:29:45.999 "snapshot": false, 00:29:45.999 "clone": false, 00:29:45.999 "esnap_clone": false 00:29:45.999 } 00:29:45.999 } 00:29:45.999 } 00:29:45.999 ] 00:29:45.999 19:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:29:45.999 19:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5785a72e-6276-4a3e-a230-3cf2c398b49f 00:29:45.999 19:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:46.258 19:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:46.258 19:07:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5785a72e-6276-4a3e-a230-3cf2c398b49f 00:29:46.258 19:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:46.516 19:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:46.516 19:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7424e55e-dbd6-4c2c-9f36-1853baec1714 00:29:46.774 19:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5785a72e-6276-4a3e-a230-3cf2c398b49f 00:29:47.034 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:47.034 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:47.034 00:29:47.034 real 0m17.007s 00:29:47.034 user 0m34.362s 00:29:47.034 sys 0m3.903s 00:29:47.034 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:47.034 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:47.034 ************************************ 00:29:47.034 END TEST lvs_grow_dirty 00:29:47.034 ************************************ 
00:29:47.295 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:29:47.295 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:29:47.295 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:29:47.295 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:29:47.295 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:29:47.295 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:29:47.295 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:29:47.295 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:29:47.295 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:29:47.295 nvmf_trace.0 00:29:47.295 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:29:47.295 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:29:47.295 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:47.295 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:29:47.295 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:47.295 19:07:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:29:47.295 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:47.295 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:47.295 rmmod nvme_tcp 00:29:47.295 rmmod nvme_fabrics 00:29:47.295 rmmod nvme_keyring 00:29:47.295 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:47.295 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:29:47.295 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:29:47.295 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3843602 ']' 00:29:47.295 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3843602 00:29:47.295 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3843602 ']' 00:29:47.295 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3843602 00:29:47.295 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:29:47.295 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:47.295 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3843602 00:29:47.295 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:47.295 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:47.295 
19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3843602' 00:29:47.295 killing process with pid 3843602 00:29:47.295 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3843602 00:29:47.296 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3843602 00:29:47.555 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:47.555 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:47.555 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:47.555 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:29:47.555 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:29:47.555 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:47.555 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:29:47.555 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:47.555 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:47.555 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.555 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:47.555 19:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.472 
19:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:49.472 00:29:49.472 real 0m42.617s 00:29:49.472 user 0m52.863s 00:29:49.472 sys 0m10.373s 00:29:49.472 19:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:49.472 19:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:49.472 ************************************ 00:29:49.472 END TEST nvmf_lvs_grow 00:29:49.472 ************************************ 00:29:49.732 19:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:49.732 19:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:49.732 19:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:49.732 19:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:49.732 ************************************ 00:29:49.732 START TEST nvmf_bdev_io_wait 00:29:49.732 ************************************ 00:29:49.732 19:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:49.732 * Looking for test storage... 
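The `END TEST` / `START TEST` banners and the `real`/`user`/`sys` timing above come from SPDK's `run_test` wrapper, which brackets each test script with banners and timing. A minimal sketch of a wrapper in that spirit (the function name and banner format here are illustrative, not SPDK's actual `autotest_common.sh` implementation):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a run_test-style wrapper: print START/END
# banners around a test command and report elapsed seconds and exit
# status, similar to the banners visible in the log above.
run_test_sketch() {
    local name=$1 rc=0
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    local start=$SECONDS
    "$@" || rc=$?            # run the test command, capturing its status
    echo "************************************"
    echo "END TEST $name (took $((SECONDS - start)) seconds, rc=$rc)"
    echo "************************************"
    return "$rc"
}

run_test_sketch demo_test true
```

The real helper additionally records per-test timing for the summary printed at the end of the autotest run; this sketch only shows the banner-and-status shape.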
00:29:49.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:49.732 19:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:49.732 19:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:29:49.732 19:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:49.732 19:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:49.732 19:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:49.732 19:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:49.732 19:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:49.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.732 --rc genhtml_branch_coverage=1 00:29:49.732 --rc genhtml_function_coverage=1 00:29:49.732 --rc genhtml_legend=1 00:29:49.732 --rc geninfo_all_blocks=1 00:29:49.732 --rc geninfo_unexecuted_blocks=1 00:29:49.732 00:29:49.732 ' 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:49.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.732 --rc genhtml_branch_coverage=1 00:29:49.732 --rc genhtml_function_coverage=1 00:29:49.732 --rc genhtml_legend=1 00:29:49.732 --rc geninfo_all_blocks=1 00:29:49.732 --rc geninfo_unexecuted_blocks=1 00:29:49.732 00:29:49.732 ' 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:49.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.732 --rc genhtml_branch_coverage=1 00:29:49.732 --rc genhtml_function_coverage=1 00:29:49.732 --rc genhtml_legend=1 00:29:49.732 --rc geninfo_all_blocks=1 00:29:49.732 --rc geninfo_unexecuted_blocks=1 00:29:49.732 00:29:49.732 ' 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:49.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.732 --rc genhtml_branch_coverage=1 00:29:49.732 --rc genhtml_function_coverage=1 
00:29:49.732 --rc genhtml_legend=1 00:29:49.732 --rc geninfo_all_blocks=1 00:29:49.732 --rc geninfo_unexecuted_blocks=1 00:29:49.732 00:29:49.732 ' 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:49.732 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:49.733 19:07:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.733 19:07:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:49.733 19:07:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:49.733 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.992 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:49.992 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:49.992 19:07:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:29:49.992 19:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:29:56.561 19:07:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:56.561 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:56.561 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:29:56.561 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:56.562 Found net devices under 0000:86:00.0: cvl_0_0 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:56.562 Found net devices under 0000:86:00.1: cvl_0_1 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:29:56.562 19:07:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:56.562 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:56.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.419 ms 00:29:56.562 00:29:56.562 --- 10.0.0.2 ping statistics --- 00:29:56.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.562 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:56.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:56.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:29:56.562 00:29:56.562 --- 10.0.0.1 ping statistics --- 00:29:56.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.562 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:56.562 19:07:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3847654 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3847654 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3847654 ']' 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:56.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:56.562 19:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:56.562 [2024-11-20 19:07:18.043546] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:56.562 [2024-11-20 19:07:18.044479] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:29:56.562 [2024-11-20 19:07:18.044511] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:56.562 [2024-11-20 19:07:18.122988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:56.562 [2024-11-20 19:07:18.166119] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:56.562 [2024-11-20 19:07:18.166155] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:56.562 [2024-11-20 19:07:18.166162] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:56.562 [2024-11-20 19:07:18.166170] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:56.562 [2024-11-20 19:07:18.166175] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:56.562 [2024-11-20 19:07:18.167754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:56.563 [2024-11-20 19:07:18.167861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:56.563 [2024-11-20 19:07:18.167966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.563 [2024-11-20 19:07:18.167967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:56.563 [2024-11-20 19:07:18.168300] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.563 19:07:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:56.563 [2024-11-20 19:07:18.287922] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:56.563 [2024-11-20 19:07:18.288381] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:56.563 [2024-11-20 19:07:18.288414] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:56.563 [2024-11-20 19:07:18.288595] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:56.563 [2024-11-20 19:07:18.300724] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:56.563 Malloc0 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.563 19:07:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:56.563 [2024-11-20 19:07:18.372857] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3847685 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3847687 00:29:56.563 19:07:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:56.563 { 00:29:56.563 "params": { 00:29:56.563 "name": "Nvme$subsystem", 00:29:56.563 "trtype": "$TEST_TRANSPORT", 00:29:56.563 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:56.563 "adrfam": "ipv4", 00:29:56.563 "trsvcid": "$NVMF_PORT", 00:29:56.563 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:56.563 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:56.563 "hdgst": ${hdgst:-false}, 00:29:56.563 "ddgst": ${ddgst:-false} 00:29:56.563 }, 00:29:56.563 "method": "bdev_nvme_attach_controller" 00:29:56.563 } 00:29:56.563 EOF 00:29:56.563 )") 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3847690 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:56.563 19:07:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3847693 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:56.563 { 00:29:56.563 "params": { 00:29:56.563 "name": "Nvme$subsystem", 00:29:56.563 "trtype": "$TEST_TRANSPORT", 00:29:56.563 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:56.563 "adrfam": "ipv4", 00:29:56.563 "trsvcid": "$NVMF_PORT", 00:29:56.563 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:56.563 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:56.563 "hdgst": ${hdgst:-false}, 00:29:56.563 "ddgst": ${ddgst:-false} 00:29:56.563 }, 00:29:56.563 "method": "bdev_nvme_attach_controller" 00:29:56.563 } 00:29:56.563 EOF 00:29:56.563 )") 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:56.563 { 00:29:56.563 "params": { 00:29:56.563 "name": "Nvme$subsystem", 00:29:56.563 "trtype": "$TEST_TRANSPORT", 00:29:56.563 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:56.563 "adrfam": "ipv4", 00:29:56.563 "trsvcid": "$NVMF_PORT", 00:29:56.563 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:56.563 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:56.563 "hdgst": ${hdgst:-false}, 00:29:56.563 "ddgst": ${ddgst:-false} 00:29:56.563 }, 00:29:56.563 "method": "bdev_nvme_attach_controller" 00:29:56.563 } 00:29:56.563 EOF 00:29:56.563 )") 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:56.563 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:56.563 { 00:29:56.563 "params": { 00:29:56.564 "name": "Nvme$subsystem", 00:29:56.564 "trtype": "$TEST_TRANSPORT", 00:29:56.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:56.564 "adrfam": "ipv4", 00:29:56.564 "trsvcid": "$NVMF_PORT", 00:29:56.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:56.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:56.564 "hdgst": ${hdgst:-false}, 00:29:56.564 "ddgst": ${ddgst:-false} 00:29:56.564 }, 00:29:56.564 "method": 
"bdev_nvme_attach_controller" 00:29:56.564 } 00:29:56.564 EOF 00:29:56.564 )") 00:29:56.564 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:56.564 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3847685 00:29:56.564 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:56.564 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:56.564 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:56.564 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:56.564 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:56.564 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:56.564 "params": { 00:29:56.564 "name": "Nvme1", 00:29:56.564 "trtype": "tcp", 00:29:56.564 "traddr": "10.0.0.2", 00:29:56.564 "adrfam": "ipv4", 00:29:56.564 "trsvcid": "4420", 00:29:56.564 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:56.564 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:56.564 "hdgst": false, 00:29:56.564 "ddgst": false 00:29:56.564 }, 00:29:56.564 "method": "bdev_nvme_attach_controller" 00:29:56.564 }' 00:29:56.564 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:29:56.564 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:56.564 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:56.564 "params": { 00:29:56.564 "name": "Nvme1", 00:29:56.564 "trtype": "tcp", 00:29:56.564 "traddr": "10.0.0.2", 00:29:56.564 "adrfam": "ipv4", 00:29:56.564 "trsvcid": "4420", 00:29:56.564 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:56.564 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:56.564 "hdgst": false, 00:29:56.564 "ddgst": false 00:29:56.564 }, 00:29:56.564 "method": "bdev_nvme_attach_controller" 00:29:56.564 }' 00:29:56.564 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:56.564 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:56.564 "params": { 00:29:56.564 "name": "Nvme1", 00:29:56.564 "trtype": "tcp", 00:29:56.564 "traddr": "10.0.0.2", 00:29:56.564 "adrfam": "ipv4", 00:29:56.564 "trsvcid": "4420", 00:29:56.564 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:56.564 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:56.564 "hdgst": false, 00:29:56.564 "ddgst": false 00:29:56.564 }, 00:29:56.564 "method": "bdev_nvme_attach_controller" 00:29:56.564 }' 00:29:56.564 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:56.564 19:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:56.564 "params": { 00:29:56.564 "name": "Nvme1", 00:29:56.564 "trtype": "tcp", 00:29:56.564 "traddr": "10.0.0.2", 00:29:56.564 "adrfam": "ipv4", 00:29:56.564 "trsvcid": "4420", 00:29:56.564 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:56.564 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:56.564 "hdgst": false, 00:29:56.564 "ddgst": false 00:29:56.564 }, 00:29:56.564 "method": "bdev_nvme_attach_controller" 
00:29:56.564 }' 00:29:56.564 [2024-11-20 19:07:18.425748] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:29:56.564 [2024-11-20 19:07:18.425751] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:29:56.564 [2024-11-20 19:07:18.425752] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:29:56.564 [2024-11-20 19:07:18.425804] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-20 19:07:18.425804] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-20 19:07:18.425805] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:29:56.564 --proc-type=auto ] 00:29:56.564 --proc-type=auto ] 00:29:56.564 [2024-11-20 19:07:18.426091] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:29:56.564 [2024-11-20 19:07:18.426137] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:29:56.564 [2024-11-20 19:07:18.622136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:56.564 [2024-11-20 19:07:18.664573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:56.564 [2024-11-20 19:07:18.715122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:56.564 [2024-11-20 19:07:18.758447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:56.564 [2024-11-20 19:07:18.817981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:56.564 [2024-11-20 19:07:18.867136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:56.564 [2024-11-20 19:07:18.871865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:56.822 [2024-11-20 19:07:18.914174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:56.822 Running I/O for 1 seconds... 00:29:56.822 Running I/O for 1 seconds... 00:29:56.822 Running I/O for 1 seconds... 00:29:56.822 Running I/O for 1 seconds... 
00:29:57.771 243224.00 IOPS, 950.09 MiB/s 00:29:57.771 Latency(us) 00:29:57.771 [2024-11-20T18:07:20.096Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:57.771 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:29:57.771 Nvme1n1 : 1.00 242808.54 948.47 0.00 0.00 524.32 243.81 1700.82 00:29:57.771 [2024-11-20T18:07:20.096Z] =================================================================================================================== 00:29:57.771 [2024-11-20T18:07:20.096Z] Total : 242808.54 948.47 0.00 0.00 524.32 243.81 1700.82 00:29:57.771 7932.00 IOPS, 30.98 MiB/s 00:29:57.771 Latency(us) 00:29:57.771 [2024-11-20T18:07:20.096Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:57.771 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:29:57.771 Nvme1n1 : 1.02 7923.92 30.95 0.00 0.00 16022.80 3183.18 21970.16 00:29:57.771 [2024-11-20T18:07:20.096Z] =================================================================================================================== 00:29:57.771 [2024-11-20T18:07:20.096Z] Total : 7923.92 30.95 0.00 0.00 16022.80 3183.18 21970.16 00:29:57.771 12149.00 IOPS, 47.46 MiB/s 00:29:57.771 Latency(us) 00:29:57.771 [2024-11-20T18:07:20.096Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:57.771 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:29:57.771 Nvme1n1 : 1.01 12210.85 47.70 0.00 0.00 10448.89 4774.77 15042.07 00:29:57.771 [2024-11-20T18:07:20.096Z] =================================================================================================================== 00:29:57.771 [2024-11-20T18:07:20.096Z] Total : 12210.85 47.70 0.00 0.00 10448.89 4774.77 15042.07 00:29:58.030 7898.00 IOPS, 30.85 MiB/s 00:29:58.030 Latency(us) 00:29:58.030 [2024-11-20T18:07:20.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:58.030 Job: Nvme1n1 (Core Mask 
0x80, workload: unmap, depth: 128, IO size: 4096) 00:29:58.030 Nvme1n1 : 1.01 8034.41 31.38 0.00 0.00 15901.11 2715.06 31207.62 00:29:58.030 [2024-11-20T18:07:20.355Z] =================================================================================================================== 00:29:58.030 [2024-11-20T18:07:20.355Z] Total : 8034.41 31.38 0.00 0.00 15901.11 2715.06 31207.62 00:29:58.030 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3847687 00:29:58.030 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3847690 00:29:58.030 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3847693 00:29:58.030 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:58.030 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.030 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:58.030 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.030 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:29:58.030 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:29:58.030 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:58.030 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:29:58.030 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:58.030 19:07:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:29:58.030 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:58.030 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:58.030 rmmod nvme_tcp 00:29:58.030 rmmod nvme_fabrics 00:29:58.030 rmmod nvme_keyring 00:29:58.030 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:58.030 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:29:58.030 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:29:58.030 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3847654 ']' 00:29:58.030 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3847654 00:29:58.030 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3847654 ']' 00:29:58.030 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3847654 00:29:58.030 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:29:58.030 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:58.030 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3847654 00:29:58.290 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:58.290 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:58.290 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3847654' 00:29:58.290 killing process with pid 3847654 00:29:58.290 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3847654 00:29:58.290 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3847654 00:29:58.290 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:58.290 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:58.290 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:58.290 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:29:58.290 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:29:58.290 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:58.290 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:29:58.290 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:58.290 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:58.290 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:58.290 19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:58.290 
19:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.289 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:00.289 00:30:00.289 real 0m10.728s 00:30:00.289 user 0m14.825s 00:30:00.289 sys 0m6.528s 00:30:00.290 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:00.290 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:00.290 ************************************ 00:30:00.290 END TEST nvmf_bdev_io_wait 00:30:00.290 ************************************ 00:30:00.290 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:00.550 ************************************ 00:30:00.550 START TEST nvmf_queue_depth 00:30:00.550 ************************************ 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:00.550 * Looking for test storage... 
00:30:00.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:00.550 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:00.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.550 --rc genhtml_branch_coverage=1 00:30:00.551 --rc genhtml_function_coverage=1 00:30:00.551 --rc genhtml_legend=1 00:30:00.551 --rc geninfo_all_blocks=1 00:30:00.551 --rc geninfo_unexecuted_blocks=1 00:30:00.551 00:30:00.551 ' 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:00.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.551 --rc genhtml_branch_coverage=1 00:30:00.551 --rc genhtml_function_coverage=1 00:30:00.551 --rc genhtml_legend=1 00:30:00.551 --rc geninfo_all_blocks=1 00:30:00.551 --rc geninfo_unexecuted_blocks=1 00:30:00.551 00:30:00.551 ' 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:00.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.551 --rc genhtml_branch_coverage=1 00:30:00.551 --rc genhtml_function_coverage=1 00:30:00.551 --rc genhtml_legend=1 00:30:00.551 --rc geninfo_all_blocks=1 00:30:00.551 --rc geninfo_unexecuted_blocks=1 00:30:00.551 00:30:00.551 ' 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:00.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.551 --rc genhtml_branch_coverage=1 00:30:00.551 --rc genhtml_function_coverage=1 00:30:00.551 --rc genhtml_legend=1 00:30:00.551 --rc 
geninfo_all_blocks=1 00:30:00.551 --rc geninfo_unexecuted_blocks=1 00:30:00.551 00:30:00.551 ' 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.551 19:07:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:00.551 19:07:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:00.551 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:00.552 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.552 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:00.552 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.552 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:00.552 19:07:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:00.552 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:30:00.552 19:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:30:07.124 
19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:07.124 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:07.124 19:07:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:07.124 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:07.124 Found net devices under 0000:86:00.0: cvl_0_0 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:07.124 Found net devices under 0000:86:00.1: cvl_0_1 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:07.124 19:07:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:07.124 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:07.125 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:07.125 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.507 ms 00:30:07.125 00:30:07.125 --- 10.0.0.2 ping statistics --- 00:30:07.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:07.125 rtt min/avg/max/mdev = 0.507/0.507/0.507/0.000 ms 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:07.125 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:07.125 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:30:07.125 00:30:07.125 --- 10.0.0.1 ping statistics --- 00:30:07.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:07.125 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:07.125 19:07:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3851574 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3851574 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3851574 ']' 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:07.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:07.125 19:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:07.125 [2024-11-20 19:07:28.787541] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:07.125 [2024-11-20 19:07:28.788495] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:30:07.125 [2024-11-20 19:07:28.788535] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:07.125 [2024-11-20 19:07:28.872462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.125 [2024-11-20 19:07:28.912620] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:07.125 [2024-11-20 19:07:28.912658] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:07.125 [2024-11-20 19:07:28.912666] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:07.125 [2024-11-20 19:07:28.912672] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:07.125 [2024-11-20 19:07:28.912680] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:07.125 [2024-11-20 19:07:28.913181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:07.125 [2024-11-20 19:07:28.980988] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:07.125 [2024-11-20 19:07:28.981199] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:07.125 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:07.125 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:07.125 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:07.125 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:07.125 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:07.125 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:07.125 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:07.125 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.125 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:07.125 [2024-11-20 19:07:29.053841] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:07.125 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.125 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:07.125 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.125 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:07.125 Malloc0 00:30:07.125 19:07:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.125 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:07.125 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.125 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:07.125 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.125 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:07.125 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.125 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:07.125 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.125 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:07.125 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.125 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:07.125 [2024-11-20 19:07:29.133930] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:07.125 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.125 
19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3851702 00:30:07.125 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:07.125 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:30:07.125 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3851702 /var/tmp/bdevperf.sock 00:30:07.125 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3851702 ']' 00:30:07.125 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:07.125 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:07.125 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:07.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:07.125 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:07.125 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:07.125 [2024-11-20 19:07:29.184647] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:30:07.125 [2024-11-20 19:07:29.184697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3851702 ] 00:30:07.125 [2024-11-20 19:07:29.257578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.125 [2024-11-20 19:07:29.298492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.125 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:07.125 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:07.126 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:07.126 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.126 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:07.384 NVMe0n1 00:30:07.384 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.384 19:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:07.384 Running I/O for 10 seconds... 
00:30:09.259 11601.00 IOPS, 45.32 MiB/s [2024-11-20T18:07:32.963Z] 11949.00 IOPS, 46.68 MiB/s [2024-11-20T18:07:33.901Z] 12215.67 IOPS, 47.72 MiB/s [2024-11-20T18:07:34.839Z] 12293.25 IOPS, 48.02 MiB/s [2024-11-20T18:07:35.775Z] 12359.40 IOPS, 48.28 MiB/s [2024-11-20T18:07:36.711Z] 12437.50 IOPS, 48.58 MiB/s [2024-11-20T18:07:37.647Z] 12439.86 IOPS, 48.59 MiB/s [2024-11-20T18:07:38.583Z] 12472.88 IOPS, 48.72 MiB/s [2024-11-20T18:07:39.959Z] 12505.56 IOPS, 48.85 MiB/s [2024-11-20T18:07:39.959Z] 12505.10 IOPS, 48.85 MiB/s 00:30:17.634 Latency(us) 00:30:17.634 [2024-11-20T18:07:39.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:17.634 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:30:17.634 Verification LBA range: start 0x0 length 0x4000 00:30:17.634 NVMe0n1 : 10.05 12537.00 48.97 0.00 0.00 81414.30 15978.30 55674.39 00:30:17.634 [2024-11-20T18:07:39.959Z] =================================================================================================================== 00:30:17.634 [2024-11-20T18:07:39.959Z] Total : 12537.00 48.97 0.00 0.00 81414.30 15978.30 55674.39 00:30:17.634 { 00:30:17.634 "results": [ 00:30:17.634 { 00:30:17.634 "job": "NVMe0n1", 00:30:17.634 "core_mask": "0x1", 00:30:17.634 "workload": "verify", 00:30:17.634 "status": "finished", 00:30:17.634 "verify_range": { 00:30:17.634 "start": 0, 00:30:17.634 "length": 16384 00:30:17.634 }, 00:30:17.634 "queue_depth": 1024, 00:30:17.634 "io_size": 4096, 00:30:17.634 "runtime": 10.054874, 00:30:17.634 "iops": 12537.004441825924, 00:30:17.634 "mibps": 48.972673600882516, 00:30:17.634 "io_failed": 0, 00:30:17.634 "io_timeout": 0, 00:30:17.634 "avg_latency_us": 81414.29684542792, 00:30:17.634 "min_latency_us": 15978.300952380952, 00:30:17.634 "max_latency_us": 55674.392380952384 00:30:17.634 } 00:30:17.634 ], 00:30:17.634 "core_count": 1 00:30:17.634 } 00:30:17.634 19:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 3851702 00:30:17.634 19:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3851702 ']' 00:30:17.634 19:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3851702 00:30:17.634 19:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:17.634 19:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:17.634 19:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3851702 00:30:17.634 19:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:17.634 19:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:17.634 19:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3851702' 00:30:17.634 killing process with pid 3851702 00:30:17.634 19:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3851702 00:30:17.634 Received shutdown signal, test time was about 10.000000 seconds 00:30:17.634 00:30:17.634 Latency(us) 00:30:17.634 [2024-11-20T18:07:39.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:17.634 [2024-11-20T18:07:39.959Z] =================================================================================================================== 00:30:17.634 [2024-11-20T18:07:39.959Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:17.634 19:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3851702 00:30:17.634 19:07:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:30:17.634 19:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:30:17.634 19:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:17.634 19:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:30:17.634 19:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:17.634 19:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:30:17.634 19:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:17.634 19:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:17.634 rmmod nvme_tcp 00:30:17.634 rmmod nvme_fabrics 00:30:17.634 rmmod nvme_keyring 00:30:17.634 19:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:17.635 19:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:30:17.635 19:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:30:17.635 19:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3851574 ']' 00:30:17.635 19:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3851574 00:30:17.635 19:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3851574 ']' 00:30:17.635 19:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3851574 00:30:17.635 19:07:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:17.635 19:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:17.635 19:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3851574 00:30:17.894 19:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:17.894 19:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:17.894 19:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3851574' 00:30:17.894 killing process with pid 3851574 00:30:17.894 19:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3851574 00:30:17.894 19:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3851574 00:30:17.894 19:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:17.894 19:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:17.894 19:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:17.894 19:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:30:17.894 19:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:30:17.894 19:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:17.894 19:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
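As a side note on the bdevperf summary reported earlier in this run (12537.00 IOPS, 48.97 MiB/s at a 4096-byte IO size): the two throughput figures are related by `MiB/s = IOPS * io_size / 2^20`. This is a standalone sanity check, not part of the SPDK test scripts:

```shell
# Recompute bdevperf's MiB/s figure from its reported IOPS and the 4 KiB IO size.
# Values are taken from the JSON results block above ("iops", "io_size").
awk 'BEGIN { iops = 12537.004441825924; io_size = 4096; printf "%.2f\n", iops * io_size / (1024 * 1024) }'
# → 48.97, matching the "mibps" field in the results
```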
00:30:17.894 19:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:17.894 19:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:17.894 19:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:17.894 19:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:17.894 19:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:20.431 00:30:20.431 real 0m19.611s 00:30:20.431 user 0m22.594s 00:30:20.431 sys 0m6.248s 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:20.431 ************************************ 00:30:20.431 END TEST nvmf_queue_depth 00:30:20.431 ************************************ 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:20.431 ************************************ 00:30:20.431 START 
TEST nvmf_target_multipath 00:30:20.431 ************************************ 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:20.431 * Looking for test storage... 00:30:20.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:30:20.431 19:07:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:20.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.431 --rc genhtml_branch_coverage=1 00:30:20.431 --rc genhtml_function_coverage=1 00:30:20.431 --rc genhtml_legend=1 00:30:20.431 --rc geninfo_all_blocks=1 00:30:20.431 --rc geninfo_unexecuted_blocks=1 00:30:20.431 00:30:20.431 ' 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:20.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.431 --rc genhtml_branch_coverage=1 00:30:20.431 --rc genhtml_function_coverage=1 00:30:20.431 --rc genhtml_legend=1 00:30:20.431 --rc geninfo_all_blocks=1 00:30:20.431 --rc geninfo_unexecuted_blocks=1 00:30:20.431 00:30:20.431 ' 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:20.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.431 --rc genhtml_branch_coverage=1 00:30:20.431 --rc genhtml_function_coverage=1 00:30:20.431 --rc genhtml_legend=1 00:30:20.431 --rc geninfo_all_blocks=1 00:30:20.431 --rc geninfo_unexecuted_blocks=1 00:30:20.431 00:30:20.431 ' 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:20.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.431 --rc genhtml_branch_coverage=1 00:30:20.431 --rc genhtml_function_coverage=1 00:30:20.431 --rc genhtml_legend=1 00:30:20.431 --rc geninfo_all_blocks=1 00:30:20.431 --rc geninfo_unexecuted_blocks=1 00:30:20.431 00:30:20.431 ' 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:30:20.431 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:20.432 19:07:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:20.432 19:07:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:30:20.432 19:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:27.004 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:27.004 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:30:27.004 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:27.004 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:27.004 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:27.004 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:27.004 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:27.004 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:30:27.004 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:27.004 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:30:27.004 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:30:27.004 19:07:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:30:27.004 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:30:27.004 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:30:27.004 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:30:27.004 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:27.004 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:27.004 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:27.004 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:27.004 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:27.004 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:27.004 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:27.004 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:27.004 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:27.005 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:27.005 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:27.005 Found net devices under 0000:86:00.0: cvl_0_0 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:27.005 19:07:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:27.005 Found net devices under 0000:86:00.1: cvl_0_1 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:27.005 19:07:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:27.005 19:07:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:27.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:27.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:30:27.005 00:30:27.005 --- 10.0.0.2 ping statistics --- 00:30:27.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:27.005 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:27.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:27.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:30:27.005 00:30:27.005 --- 10.0.0.1 ping statistics --- 00:30:27.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:27.005 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:30:27.005 only one NIC for nvmf test 00:30:27.005 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:30:27.005 19:07:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:27.006 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:27.006 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:27.006 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:27.006 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:27.006 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:27.006 rmmod nvme_tcp 00:30:27.006 rmmod nvme_fabrics 00:30:27.006 rmmod nvme_keyring 00:30:27.006 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:27.006 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:27.006 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:27.006 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:27.006 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:27.006 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:27.006 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:27.006 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:27.006 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:27.006 19:07:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:27.006 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:27.006 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:27.006 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:27.006 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.006 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:27.006 19:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.384 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:28.384 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:30:28.384 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:30:28.384 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:28.384 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:28.384 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:28.384 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:28.384 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:30:28.384 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:28.384 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:28.384 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:28.384 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:28.384 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:28.384 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:28.384 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:28.384 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:28.384 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:28.384 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:28.384 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:28.384 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:28.384 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:28.384 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:28.384 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:28.384 
19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:28.384 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.384 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:28.384 00:30:28.384 real 0m8.302s 00:30:28.384 user 0m1.742s 00:30:28.384 sys 0m4.580s 00:30:28.384 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:28.384 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:28.384 ************************************ 00:30:28.384 END TEST nvmf_target_multipath 00:30:28.384 ************************************ 00:30:28.384 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:28.384 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:28.384 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:28.384 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:28.644 ************************************ 00:30:28.644 START TEST nvmf_zcopy 00:30:28.644 ************************************ 00:30:28.644 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:28.644 * Looking for test storage... 
00:30:28.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:28.644 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:28.644 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:30:28.644 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:28.644 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:28.644 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:28.644 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:28.644 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:28.644 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:30:28.644 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:30:28.644 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:30:28.644 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:30:28.644 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:30:28.644 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:30:28.644 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:30:28.644 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:28.644 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:30:28.644 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:30:28.644 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:28.644 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:28.644 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:30:28.644 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:30:28.644 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:28.644 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:30:28.644 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:30:28.644 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:30:28.644 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:30:28.644 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:28.644 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:30:28.644 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:30:28.644 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:28.644 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:28.644 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:30:28.644 19:07:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:28.644 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:28.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.644 --rc genhtml_branch_coverage=1 00:30:28.644 --rc genhtml_function_coverage=1 00:30:28.644 --rc genhtml_legend=1 00:30:28.644 --rc geninfo_all_blocks=1 00:30:28.644 --rc geninfo_unexecuted_blocks=1 00:30:28.644 00:30:28.644 ' 00:30:28.644 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:28.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.644 --rc genhtml_branch_coverage=1 00:30:28.644 --rc genhtml_function_coverage=1 00:30:28.644 --rc genhtml_legend=1 00:30:28.644 --rc geninfo_all_blocks=1 00:30:28.645 --rc geninfo_unexecuted_blocks=1 00:30:28.645 00:30:28.645 ' 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:28.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.645 --rc genhtml_branch_coverage=1 00:30:28.645 --rc genhtml_function_coverage=1 00:30:28.645 --rc genhtml_legend=1 00:30:28.645 --rc geninfo_all_blocks=1 00:30:28.645 --rc geninfo_unexecuted_blocks=1 00:30:28.645 00:30:28.645 ' 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:28.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.645 --rc genhtml_branch_coverage=1 00:30:28.645 --rc genhtml_function_coverage=1 00:30:28.645 --rc genhtml_legend=1 00:30:28.645 --rc geninfo_all_blocks=1 00:30:28.645 --rc geninfo_unexecuted_blocks=1 00:30:28.645 00:30:28.645 ' 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:28.645 19:07:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:28.645 19:07:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:30:28.645 19:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:35.218 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:35.219 
19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:35.219 19:07:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:35.219 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:35.219 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:35.219 Found net devices under 0000:86:00.0: cvl_0_0 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:35.219 Found net devices under 0000:86:00.1: cvl_0_1 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:35.219 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:35.220 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:35.220 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:35.220 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:35.220 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:35.220 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:35.220 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:35.220 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:35.220 19:07:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:35.220 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:35.220 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:35.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:35.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.404 ms 00:30:35.220 00:30:35.220 --- 10.0.0.2 ping statistics --- 00:30:35.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.220 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:30:35.220 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:35.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:35.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:30:35.220 00:30:35.220 --- 10.0.0.1 ping statistics --- 00:30:35.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.220 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:30:35.220 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:35.220 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:30:35.220 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:35.220 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:35.220 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:35.220 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:35.220 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:35.220 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:35.220 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:35.220 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:30:35.220 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:35.220 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:35.220 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:35.220 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=3860345 00:30:35.220 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3860345 00:30:35.220 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:35.220 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3860345 ']' 00:30:35.220 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:35.220 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:35.220 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:35.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:35.220 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:35.220 19:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:35.220 [2024-11-20 19:07:56.878599] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:35.220 [2024-11-20 19:07:56.879553] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:30:35.220 [2024-11-20 19:07:56.879592] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:35.220 [2024-11-20 19:07:56.957147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:35.220 [2024-11-20 19:07:56.998359] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:35.220 [2024-11-20 19:07:56.998396] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:35.220 [2024-11-20 19:07:56.998402] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:35.220 [2024-11-20 19:07:56.998408] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:35.220 [2024-11-20 19:07:56.998414] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:35.220 [2024-11-20 19:07:56.998930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:35.220 [2024-11-20 19:07:57.065010] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:35.220 [2024-11-20 19:07:57.065231] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:35.220 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:35.220 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:30:35.220 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:35.220 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:35.220 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:35.220 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:35.220 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:30:35.220 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:30:35.220 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.220 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:35.220 [2024-11-20 19:07:57.131681] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:35.220 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.220 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:35.220 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.220 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:35.220 
19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.220 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:35.220 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.220 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:35.220 [2024-11-20 19:07:57.159881] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:35.220 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.220 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:35.220 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.220 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:35.220 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.220 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:30:35.220 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.221 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:35.221 malloc0 00:30:35.221 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.221 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:35.221 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.221 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:35.221 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.221 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:30:35.221 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:30:35.221 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:35.221 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:35.221 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:35.221 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:35.221 { 00:30:35.221 "params": { 00:30:35.221 "name": "Nvme$subsystem", 00:30:35.221 "trtype": "$TEST_TRANSPORT", 00:30:35.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:35.221 "adrfam": "ipv4", 00:30:35.221 "trsvcid": "$NVMF_PORT", 00:30:35.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:35.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:35.221 "hdgst": ${hdgst:-false}, 00:30:35.221 "ddgst": ${ddgst:-false} 00:30:35.221 }, 00:30:35.221 "method": "bdev_nvme_attach_controller" 00:30:35.221 } 00:30:35.221 EOF 00:30:35.221 )") 00:30:35.221 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:35.221 19:07:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:30:35.221 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:35.221 19:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:35.221 "params": { 00:30:35.221 "name": "Nvme1", 00:30:35.221 "trtype": "tcp", 00:30:35.221 "traddr": "10.0.0.2", 00:30:35.221 "adrfam": "ipv4", 00:30:35.221 "trsvcid": "4420", 00:30:35.221 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:35.221 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:35.221 "hdgst": false, 00:30:35.221 "ddgst": false 00:30:35.221 }, 00:30:35.221 "method": "bdev_nvme_attach_controller" 00:30:35.221 }' 00:30:35.221 [2024-11-20 19:07:57.251743] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:30:35.221 [2024-11-20 19:07:57.251785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3860370 ] 00:30:35.221 [2024-11-20 19:07:57.323284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:35.221 [2024-11-20 19:07:57.363607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:35.480 Running I/O for 10 seconds... 
00:30:37.801 8544.00 IOPS, 66.75 MiB/s [2024-11-20T18:08:00.693Z] 8557.50 IOPS, 66.86 MiB/s [2024-11-20T18:08:02.070Z] 8546.00 IOPS, 66.77 MiB/s [2024-11-20T18:08:03.005Z] 8548.50 IOPS, 66.79 MiB/s [2024-11-20T18:08:03.938Z] 8558.80 IOPS, 66.87 MiB/s [2024-11-20T18:08:04.873Z] 8567.67 IOPS, 66.93 MiB/s [2024-11-20T18:08:05.808Z] 8583.14 IOPS, 67.06 MiB/s [2024-11-20T18:08:06.742Z] 8589.00 IOPS, 67.10 MiB/s [2024-11-20T18:08:08.117Z] 8595.22 IOPS, 67.15 MiB/s [2024-11-20T18:08:08.117Z] 8597.60 IOPS, 67.17 MiB/s 00:30:45.792 Latency(us) 00:30:45.792 [2024-11-20T18:08:08.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:45.792 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:30:45.792 Verification LBA range: start 0x0 length 0x1000 00:30:45.792 Nvme1n1 : 10.05 8563.91 66.91 0.00 0.00 14849.23 2512.21 43940.33 00:30:45.792 [2024-11-20T18:08:08.117Z] =================================================================================================================== 00:30:45.792 [2024-11-20T18:08:08.117Z] Total : 8563.91 66.91 0.00 0.00 14849.23 2512.21 43940.33 00:30:45.792 19:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3862144 00:30:45.792 19:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:30:45.792 19:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:45.792 19:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:30:45.792 19:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:30:45.792 19:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:45.792 19:08:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:45.792 19:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:45.792 19:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:45.792 { 00:30:45.792 "params": { 00:30:45.792 "name": "Nvme$subsystem", 00:30:45.792 "trtype": "$TEST_TRANSPORT", 00:30:45.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.792 "adrfam": "ipv4", 00:30:45.792 "trsvcid": "$NVMF_PORT", 00:30:45.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.792 "hdgst": ${hdgst:-false}, 00:30:45.792 "ddgst": ${ddgst:-false} 00:30:45.792 }, 00:30:45.792 "method": "bdev_nvme_attach_controller" 00:30:45.792 } 00:30:45.792 EOF 00:30:45.792 )") 00:30:45.792 19:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:45.792 [2024-11-20 19:08:07.923269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.792 [2024-11-20 19:08:07.923299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.792 19:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:30:45.792 19:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:45.792 19:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:45.792 "params": { 00:30:45.792 "name": "Nvme1", 00:30:45.792 "trtype": "tcp", 00:30:45.792 "traddr": "10.0.0.2", 00:30:45.792 "adrfam": "ipv4", 00:30:45.792 "trsvcid": "4420", 00:30:45.792 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:45.792 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:45.792 "hdgst": false, 00:30:45.792 "ddgst": false 00:30:45.792 }, 00:30:45.792 "method": "bdev_nvme_attach_controller" 00:30:45.792 }' 00:30:45.792 [2024-11-20 19:08:07.935237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.792 [2024-11-20 19:08:07.935250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.792 [2024-11-20 19:08:07.947231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.792 [2024-11-20 19:08:07.947242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.792 [2024-11-20 19:08:07.959230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.792 [2024-11-20 19:08:07.959241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.792 [2024-11-20 19:08:07.960812] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:30:45.792 [2024-11-20 19:08:07.960857] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3862144 ] 00:30:45.792 [2024-11-20 19:08:07.971232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.792 [2024-11-20 19:08:07.971243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.792 [2024-11-20 19:08:07.983227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.793 [2024-11-20 19:08:07.983238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.793 [2024-11-20 19:08:07.995232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.793 [2024-11-20 19:08:07.995243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.793 [2024-11-20 19:08:08.007230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.793 [2024-11-20 19:08:08.007240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.793 [2024-11-20 19:08:08.019229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.793 [2024-11-20 19:08:08.019239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.793 [2024-11-20 19:08:08.031235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.793 [2024-11-20 19:08:08.031250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.793 [2024-11-20 19:08:08.035089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.793 [2024-11-20 19:08:08.043229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:30:45.793 [2024-11-20 19:08:08.043241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.793 [2024-11-20 19:08:08.055231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.793 [2024-11-20 19:08:08.055244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.793 [2024-11-20 19:08:08.067229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.793 [2024-11-20 19:08:08.067239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.793 [2024-11-20 19:08:08.076464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.793 [2024-11-20 19:08:08.079230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.793 [2024-11-20 19:08:08.079241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.793 [2024-11-20 19:08:08.091241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.793 [2024-11-20 19:08:08.091261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.793 [2024-11-20 19:08:08.103237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.793 [2024-11-20 19:08:08.103255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.793 [2024-11-20 19:08:08.115245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.793 [2024-11-20 19:08:08.115266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.051 [2024-11-20 19:08:08.127236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.051 [2024-11-20 19:08:08.127253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.051 [2024-11-20 19:08:08.139236] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.051 [2024-11-20 19:08:08.139249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.051 [2024-11-20 19:08:08.151229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.051 [2024-11-20 19:08:08.151239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.051 [2024-11-20 19:08:08.163241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.051 [2024-11-20 19:08:08.163266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.051 [2024-11-20 19:08:08.175238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.051 [2024-11-20 19:08:08.175254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.052 [2024-11-20 19:08:08.187238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.052 [2024-11-20 19:08:08.187254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.052 [2024-11-20 19:08:08.199233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.052 [2024-11-20 19:08:08.199244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.052 [2024-11-20 19:08:08.211229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.052 [2024-11-20 19:08:08.211238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.052 [2024-11-20 19:08:08.223228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.052 [2024-11-20 19:08:08.223237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.052 [2024-11-20 19:08:08.235234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:46.052 [2024-11-20 19:08:08.235249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.052 [2024-11-20 19:08:08.247233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.052 [2024-11-20 19:08:08.247248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.052 [2024-11-20 19:08:08.259229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.052 [2024-11-20 19:08:08.259239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.052 [2024-11-20 19:08:08.271230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.052 [2024-11-20 19:08:08.271239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.052 [2024-11-20 19:08:08.283227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.052 [2024-11-20 19:08:08.283239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.052 [2024-11-20 19:08:08.295232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.052 [2024-11-20 19:08:08.295245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.052 [2024-11-20 19:08:08.307229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.052 [2024-11-20 19:08:08.307239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.052 [2024-11-20 19:08:08.319230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.052 [2024-11-20 19:08:08.319242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.052 [2024-11-20 19:08:08.331235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.052 
[2024-11-20 19:08:08.331251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.052 [2024-11-20 19:08:08.343229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.052 [2024-11-20 19:08:08.343240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.052 [2024-11-20 19:08:08.355230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.052 [2024-11-20 19:08:08.355240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.052 [2024-11-20 19:08:08.367230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.052 [2024-11-20 19:08:08.367240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.311 [2024-11-20 19:08:08.379515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.311 [2024-11-20 19:08:08.379536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.311 [2024-11-20 19:08:08.391233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.311 [2024-11-20 19:08:08.391250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.311 Running I/O for 5 seconds... 
00:30:46.311 [2024-11-20 19:08:08.406397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.311 [2024-11-20 19:08:08.406417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.311 [2024-11-20 19:08:08.420936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.311 [2024-11-20 19:08:08.420955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.311 [2024-11-20 19:08:08.435707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.311 [2024-11-20 19:08:08.435727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.311 [2024-11-20 19:08:08.451364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.311 [2024-11-20 19:08:08.451384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.311 [2024-11-20 19:08:08.462959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.311 [2024-11-20 19:08:08.462978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.311 [2024-11-20 19:08:08.477184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.311 [2024-11-20 19:08:08.477208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.311 [2024-11-20 19:08:08.492078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.311 [2024-11-20 19:08:08.492096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.311 [2024-11-20 19:08:08.507425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.311 [2024-11-20 19:08:08.507444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.311 [2024-11-20 19:08:08.520068] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.311 [2024-11-20 19:08:08.520086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.311 [2024-11-20 19:08:08.534486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.311 [2024-11-20 19:08:08.534505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.311 [2024-11-20 19:08:08.548816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.311 [2024-11-20 19:08:08.548835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.311 [2024-11-20 19:08:08.563065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.311 [2024-11-20 19:08:08.563084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.311 [2024-11-20 19:08:08.576515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.311 [2024-11-20 19:08:08.576533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.311 [2024-11-20 19:08:08.591395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.311 [2024-11-20 19:08:08.591413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.311 [2024-11-20 19:08:08.604159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.311 [2024-11-20 19:08:08.604177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.311 [2024-11-20 19:08:08.619000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.311 [2024-11-20 19:08:08.619019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.311 [2024-11-20 19:08:08.630173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:46.311 [2024-11-20 19:08:08.630192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.570 [2024-11-20 19:08:08.644576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.570 [2024-11-20 19:08:08.644596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.570 [2024-11-20 19:08:08.654737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.570 [2024-11-20 19:08:08.654757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.570 [2024-11-20 19:08:08.669071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.570 [2024-11-20 19:08:08.669091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.570 [2024-11-20 19:08:08.683728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.571 [2024-11-20 19:08:08.683746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.571 [2024-11-20 19:08:08.695801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.571 [2024-11-20 19:08:08.695820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.571 [2024-11-20 19:08:08.708676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.571 [2024-11-20 19:08:08.708696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.571 [2024-11-20 19:08:08.723578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.571 [2024-11-20 19:08:08.723597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.571 [2024-11-20 19:08:08.737279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.571 
[2024-11-20 19:08:08.737298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.571 [2024-11-20 19:08:08.752278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.571 [2024-11-20 19:08:08.752296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.571 [2024-11-20 19:08:08.767247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.571 [2024-11-20 19:08:08.767267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.571 [2024-11-20 19:08:08.779699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.571 [2024-11-20 19:08:08.779717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.571 [2024-11-20 19:08:08.792766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.571 [2024-11-20 19:08:08.792785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.571 [2024-11-20 19:08:08.807279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.571 [2024-11-20 19:08:08.807298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.571 [2024-11-20 19:08:08.819795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.571 [2024-11-20 19:08:08.819818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.571 [2024-11-20 19:08:08.832959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.571 [2024-11-20 19:08:08.832979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.571 [2024-11-20 19:08:08.847380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.571 [2024-11-20 19:08:08.847399] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.571 [2024-11-20 19:08:08.858045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.571 [2024-11-20 19:08:08.858064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.571 [2024-11-20 19:08:08.872486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.571 [2024-11-20 19:08:08.872505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.571 [2024-11-20 19:08:08.887468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.571 [2024-11-20 19:08:08.887487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.830 [2024-11-20 19:08:08.899086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.830 [2024-11-20 19:08:08.899107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.830 [2024-11-20 19:08:08.912915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.830 [2024-11-20 19:08:08.912936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.830 [2024-11-20 19:08:08.927528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.830 [2024-11-20 19:08:08.927547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.830 [2024-11-20 19:08:08.943738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.830 [2024-11-20 19:08:08.943757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.830 [2024-11-20 19:08:08.959134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.830 [2024-11-20 19:08:08.959153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:46.830 [2024-11-20 19:08:08.971647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.830 [2024-11-20 19:08:08.971666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.830 [2024-11-20 19:08:08.984996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.830 [2024-11-20 19:08:08.985015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.830 [2024-11-20 19:08:08.999761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.830 [2024-11-20 19:08:08.999780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.830 [2024-11-20 19:08:09.010261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.830 [2024-11-20 19:08:09.010281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.830 [2024-11-20 19:08:09.024273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.830 [2024-11-20 19:08:09.024292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.830 [2024-11-20 19:08:09.035533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.830 [2024-11-20 19:08:09.035553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.830 [2024-11-20 19:08:09.049238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.830 [2024-11-20 19:08:09.049259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.830 [2024-11-20 19:08:09.063906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.830 [2024-11-20 19:08:09.063928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.830 [2024-11-20 19:08:09.079035] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:46.830 [2024-11-20 19:08:09.079055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the two messages above repeat verbatim roughly every 10-15 ms, from 19:08:09.079 through 19:08:11.467 (elapsed 00:30:46.830 through 00:30:49.166); every retry fails with the same NSID 1 conflict while the I/O workload continues; periodic throughput checkpoints interleaved with the errors:]
16868.00 IOPS, 131.78 MiB/s [2024-11-20T18:08:09.415Z]
16854.50 IOPS, 131.68 MiB/s [2024-11-20T18:08:10.454Z]
16840.67 IOPS, 131.57 MiB/s [2024-11-20T18:08:11.491Z]
00:30:49.166 [2024-11-20 19:08:11.467937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.166 [2024-11-20 19:08:11.467956]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.166 [2024-11-20 19:08:11.482895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.166 [2024-11-20 19:08:11.482914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.426 [2024-11-20 19:08:11.495090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.426 [2024-11-20 19:08:11.495109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.426 [2024-11-20 19:08:11.508808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.426 [2024-11-20 19:08:11.508827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.426 [2024-11-20 19:08:11.523352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.426 [2024-11-20 19:08:11.523371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.426 [2024-11-20 19:08:11.534390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.426 [2024-11-20 19:08:11.534413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.426 [2024-11-20 19:08:11.549182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.426 [2024-11-20 19:08:11.549209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.426 [2024-11-20 19:08:11.564039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.426 [2024-11-20 19:08:11.564057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.426 [2024-11-20 19:08:11.579029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.426 [2024-11-20 19:08:11.579047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:49.426 [2024-11-20 19:08:11.592933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.426 [2024-11-20 19:08:11.592952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.426 [2024-11-20 19:08:11.607532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.426 [2024-11-20 19:08:11.607550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.426 [2024-11-20 19:08:11.622909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.426 [2024-11-20 19:08:11.622927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.426 [2024-11-20 19:08:11.635983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.426 [2024-11-20 19:08:11.636002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.426 [2024-11-20 19:08:11.648417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.426 [2024-11-20 19:08:11.648436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.426 [2024-11-20 19:08:11.663091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.426 [2024-11-20 19:08:11.663111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.426 [2024-11-20 19:08:11.677436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.426 [2024-11-20 19:08:11.677455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.426 [2024-11-20 19:08:11.691757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.426 [2024-11-20 19:08:11.691775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.426 [2024-11-20 19:08:11.705157] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.426 [2024-11-20 19:08:11.705177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.426 [2024-11-20 19:08:11.719677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.426 [2024-11-20 19:08:11.719695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.426 [2024-11-20 19:08:11.735486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.426 [2024-11-20 19:08:11.735506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.426 [2024-11-20 19:08:11.749125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.426 [2024-11-20 19:08:11.749145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.686 [2024-11-20 19:08:11.764132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.686 [2024-11-20 19:08:11.764152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.686 [2024-11-20 19:08:11.779902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.686 [2024-11-20 19:08:11.779921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.686 [2024-11-20 19:08:11.794985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.686 [2024-11-20 19:08:11.795004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.686 [2024-11-20 19:08:11.808693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.686 [2024-11-20 19:08:11.808716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.686 [2024-11-20 19:08:11.823750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:49.686 [2024-11-20 19:08:11.823768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.686 [2024-11-20 19:08:11.839088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.686 [2024-11-20 19:08:11.839107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.686 [2024-11-20 19:08:11.852396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.686 [2024-11-20 19:08:11.852415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.686 [2024-11-20 19:08:11.867126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.686 [2024-11-20 19:08:11.867145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.686 [2024-11-20 19:08:11.879801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.686 [2024-11-20 19:08:11.879820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.686 [2024-11-20 19:08:11.895281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.686 [2024-11-20 19:08:11.895301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.686 [2024-11-20 19:08:11.906486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.686 [2024-11-20 19:08:11.906507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.686 [2024-11-20 19:08:11.921106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.686 [2024-11-20 19:08:11.921127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.686 [2024-11-20 19:08:11.935794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.686 
[2024-11-20 19:08:11.935813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.686 [2024-11-20 19:08:11.951617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.686 [2024-11-20 19:08:11.951638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.686 [2024-11-20 19:08:11.965023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.686 [2024-11-20 19:08:11.965042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.686 [2024-11-20 19:08:11.979694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.686 [2024-11-20 19:08:11.979713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.686 [2024-11-20 19:08:11.995555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.686 [2024-11-20 19:08:11.995575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.945 [2024-11-20 19:08:12.011391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.945 [2024-11-20 19:08:12.011412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.945 [2024-11-20 19:08:12.025092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.945 [2024-11-20 19:08:12.025112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.945 [2024-11-20 19:08:12.039703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.945 [2024-11-20 19:08:12.039722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.945 [2024-11-20 19:08:12.055112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.945 [2024-11-20 19:08:12.055132] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.945 [2024-11-20 19:08:12.068975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.945 [2024-11-20 19:08:12.068995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.945 [2024-11-20 19:08:12.083724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.945 [2024-11-20 19:08:12.083742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.945 [2024-11-20 19:08:12.096339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.945 [2024-11-20 19:08:12.096359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.945 [2024-11-20 19:08:12.110952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.945 [2024-11-20 19:08:12.110972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.945 [2024-11-20 19:08:12.124868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.945 [2024-11-20 19:08:12.124888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.945 [2024-11-20 19:08:12.139488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.945 [2024-11-20 19:08:12.139508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.945 [2024-11-20 19:08:12.152219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.945 [2024-11-20 19:08:12.152238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.945 [2024-11-20 19:08:12.163653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.945 [2024-11-20 19:08:12.163672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:49.945 [2024-11-20 19:08:12.176922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.945 [2024-11-20 19:08:12.176942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.945 [2024-11-20 19:08:12.191432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.945 [2024-11-20 19:08:12.191452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.946 [2024-11-20 19:08:12.202060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.946 [2024-11-20 19:08:12.202080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.946 [2024-11-20 19:08:12.216964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.946 [2024-11-20 19:08:12.216983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.946 [2024-11-20 19:08:12.231289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.946 [2024-11-20 19:08:12.231308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.946 [2024-11-20 19:08:12.242910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.946 [2024-11-20 19:08:12.242930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.946 [2024-11-20 19:08:12.256783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.946 [2024-11-20 19:08:12.256802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.204 [2024-11-20 19:08:12.271899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.204 [2024-11-20 19:08:12.271920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.204 [2024-11-20 19:08:12.287077] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.204 [2024-11-20 19:08:12.287099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.204 [2024-11-20 19:08:12.301344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.204 [2024-11-20 19:08:12.301363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.204 [2024-11-20 19:08:12.316156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.204 [2024-11-20 19:08:12.316175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.204 [2024-11-20 19:08:12.331228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.205 [2024-11-20 19:08:12.331248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.205 [2024-11-20 19:08:12.345551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.205 [2024-11-20 19:08:12.345570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.205 [2024-11-20 19:08:12.360193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.205 [2024-11-20 19:08:12.360218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.205 [2024-11-20 19:08:12.374813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.205 [2024-11-20 19:08:12.374832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.205 [2024-11-20 19:08:12.389141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.205 [2024-11-20 19:08:12.389160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.205 [2024-11-20 19:08:12.403681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:50.205 [2024-11-20 19:08:12.403700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.205 16865.50 IOPS, 131.76 MiB/s [2024-11-20T18:08:12.530Z] [2024-11-20 19:08:12.416226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.205 [2024-11-20 19:08:12.416245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.205 [2024-11-20 19:08:12.428848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.205 [2024-11-20 19:08:12.428867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.205 [2024-11-20 19:08:12.443326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.205 [2024-11-20 19:08:12.443345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.205 [2024-11-20 19:08:12.456165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.205 [2024-11-20 19:08:12.456184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.205 [2024-11-20 19:08:12.470843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.205 [2024-11-20 19:08:12.470863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.205 [2024-11-20 19:08:12.484423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.205 [2024-11-20 19:08:12.484444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.205 [2024-11-20 19:08:12.499363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.205 [2024-11-20 19:08:12.499382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.205 [2024-11-20 19:08:12.510438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:50.205 [2024-11-20 19:08:12.510459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.205 [2024-11-20 19:08:12.525229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.205 [2024-11-20 19:08:12.525249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.463 [2024-11-20 19:08:12.539981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.464 [2024-11-20 19:08:12.540001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.464 [2024-11-20 19:08:12.551440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.464 [2024-11-20 19:08:12.551459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.464 [2024-11-20 19:08:12.564802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.464 [2024-11-20 19:08:12.564821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.464 [2024-11-20 19:08:12.574900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.464 [2024-11-20 19:08:12.574919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.464 [2024-11-20 19:08:12.589055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.464 [2024-11-20 19:08:12.589079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.464 [2024-11-20 19:08:12.603757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.464 [2024-11-20 19:08:12.603776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.464 [2024-11-20 19:08:12.619555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.464 
[2024-11-20 19:08:12.619574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.464 [2024-11-20 19:08:12.631557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.464 [2024-11-20 19:08:12.631575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.464 [2024-11-20 19:08:12.644899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.464 [2024-11-20 19:08:12.644918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.464 [2024-11-20 19:08:12.659718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.464 [2024-11-20 19:08:12.659736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.464 [2024-11-20 19:08:12.675188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.464 [2024-11-20 19:08:12.675212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.464 [2024-11-20 19:08:12.688189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.464 [2024-11-20 19:08:12.688214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.464 [2024-11-20 19:08:12.699345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.464 [2024-11-20 19:08:12.699363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.464 [2024-11-20 19:08:12.713324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.464 [2024-11-20 19:08:12.713343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.464 [2024-11-20 19:08:12.728136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.464 [2024-11-20 19:08:12.728154] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.464 [2024-11-20 19:08:12.743564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.464 [2024-11-20 19:08:12.743583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.464 [2024-11-20 19:08:12.754860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.464 [2024-11-20 19:08:12.754879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.464 [2024-11-20 19:08:12.768988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.464 [2024-11-20 19:08:12.769007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.464 [2024-11-20 19:08:12.783846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.464 [2024-11-20 19:08:12.783865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.722 [2024-11-20 19:08:12.798705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.722 [2024-11-20 19:08:12.798725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.722 [2024-11-20 19:08:12.812723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.722 [2024-11-20 19:08:12.812743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.723 [2024-11-20 19:08:12.827141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.723 [2024-11-20 19:08:12.827160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.723 [2024-11-20 19:08:12.840549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.723 [2024-11-20 19:08:12.840568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:50.723 [2024-11-20 19:08:12.855708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.723 [2024-11-20 19:08:12.855733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.723 [2024-11-20 19:08:12.871247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.723 [2024-11-20 19:08:12.871266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.723 [2024-11-20 19:08:12.882515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.723 [2024-11-20 19:08:12.882539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.723 [2024-11-20 19:08:12.896927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.723 [2024-11-20 19:08:12.896947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.723 [2024-11-20 19:08:12.911575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.723 [2024-11-20 19:08:12.911594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.723 [2024-11-20 19:08:12.923573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.723 [2024-11-20 19:08:12.923595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.723 [2024-11-20 19:08:12.939026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.723 [2024-11-20 19:08:12.939045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.723 [2024-11-20 19:08:12.952020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.723 [2024-11-20 19:08:12.952039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.723 [2024-11-20 19:08:12.963229] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.723 [2024-11-20 19:08:12.963251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.723 [2024-11-20 19:08:12.977302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.723 [2024-11-20 19:08:12.977322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.723 [2024-11-20 19:08:12.992024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.723 [2024-11-20 19:08:12.992043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.723 [2024-11-20 19:08:13.006788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.723 [2024-11-20 19:08:13.006808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.723 [2024-11-20 19:08:13.020287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.723 [2024-11-20 19:08:13.020307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.723 [2024-11-20 19:08:13.035094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.723 [2024-11-20 19:08:13.035114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.982 [2024-11-20 19:08:13.049676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.982 [2024-11-20 19:08:13.049696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.982 [2024-11-20 19:08:13.064737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.982 [2024-11-20 19:08:13.064756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.982 [2024-11-20 19:08:13.078951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:50.982 [2024-11-20 19:08:13.078970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.982 [2024-11-20 19:08:13.090238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.982 [2024-11-20 19:08:13.090257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.982 [2024-11-20 19:08:13.105200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.982 [2024-11-20 19:08:13.105224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.982 [2024-11-20 19:08:13.119562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.982 [2024-11-20 19:08:13.119586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.982 [2024-11-20 19:08:13.131998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.982 [2024-11-20 19:08:13.132017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.982 [2024-11-20 19:08:13.147026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.982 [2024-11-20 19:08:13.147045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.982 [2024-11-20 19:08:13.159767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.982 [2024-11-20 19:08:13.159786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.982 [2024-11-20 19:08:13.175200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.982 [2024-11-20 19:08:13.175223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.982 [2024-11-20 19:08:13.188052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.982 
[2024-11-20 19:08:13.188070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.982 [2024-11-20 19:08:13.200969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.982 [2024-11-20 19:08:13.200987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.982 [2024-11-20 19:08:13.215860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.982 [2024-11-20 19:08:13.215878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.983 [2024-11-20 19:08:13.231294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.983 [2024-11-20 19:08:13.231314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.983 [2024-11-20 19:08:13.245087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.983 [2024-11-20 19:08:13.245106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.983 [2024-11-20 19:08:13.259522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.983 [2024-11-20 19:08:13.259540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.983 [2024-11-20 19:08:13.274561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.983 [2024-11-20 19:08:13.274580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.983 [2024-11-20 19:08:13.289011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.983 [2024-11-20 19:08:13.289029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.983 [2024-11-20 19:08:13.303750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.983 [2024-11-20 19:08:13.303768] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.243 [2024-11-20 19:08:13.318971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.243 [2024-11-20 19:08:13.318991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.243 [2024-11-20 19:08:13.332968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.243 [2024-11-20 19:08:13.332989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.243 [2024-11-20 19:08:13.348312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.243 [2024-11-20 19:08:13.348332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.243 [2024-11-20 19:08:13.363008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.243 [2024-11-20 19:08:13.363028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.243 [2024-11-20 19:08:13.376551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.243 [2024-11-20 19:08:13.376570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.243 [2024-11-20 19:08:13.387807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.243 [2024-11-20 19:08:13.387830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.243 [2024-11-20 19:08:13.401140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.243 [2024-11-20 19:08:13.401159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.243 16858.80 IOPS, 131.71 MiB/s [2024-11-20T18:08:13.568Z] [2024-11-20 19:08:13.412833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.243 [2024-11-20 19:08:13.412853] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.243 00:30:51.243 Latency(us) 00:30:51.243 [2024-11-20T18:08:13.568Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:51.243 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:30:51.243 Nvme1n1 : 5.01 16860.41 131.72 0.00 0.00 7584.13 1950.48 12732.71 00:30:51.243 [2024-11-20T18:08:13.568Z] =================================================================================================================== 00:30:51.243 [2024-11-20T18:08:13.568Z] Total : 16860.41 131.72 0.00 0.00 7584.13 1950.48 12732.71 00:30:51.243 [2024-11-20 19:08:13.423234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.243 [2024-11-20 19:08:13.423251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.243 [2024-11-20 19:08:13.435238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.243 [2024-11-20 19:08:13.435252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.243 [2024-11-20 19:08:13.447248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.243 [2024-11-20 19:08:13.447268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.243 [2024-11-20 19:08:13.459238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.243 [2024-11-20 19:08:13.459253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.243 [2024-11-20 19:08:13.471240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.243 [2024-11-20 19:08:13.471255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.243 [2024-11-20 19:08:13.483233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:30:51.243 [2024-11-20 19:08:13.483248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.243 [2024-11-20 19:08:13.495235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.243 [2024-11-20 19:08:13.495251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.243 [2024-11-20 19:08:13.507233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.243 [2024-11-20 19:08:13.507248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.243 [2024-11-20 19:08:13.519235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.243 [2024-11-20 19:08:13.519249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.243 [2024-11-20 19:08:13.531230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.243 [2024-11-20 19:08:13.531241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.243 [2024-11-20 19:08:13.543234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.243 [2024-11-20 19:08:13.543246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.243 [2024-11-20 19:08:13.555231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.243 [2024-11-20 19:08:13.555243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.502 [2024-11-20 19:08:13.567241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.502 [2024-11-20 19:08:13.567260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3862144) - No such process 00:30:51.502 19:08:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3862144 00:30:51.502 19:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:51.502 19:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.502 19:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:51.502 19:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.502 19:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:51.502 19:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.502 19:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:51.502 delay0 00:30:51.502 19:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.502 19:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:30:51.502 19:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.502 19:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:51.502 19:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.502 19:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 ns:1' 00:30:51.502 [2024-11-20 19:08:13.675137] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:58.189 Initializing NVMe Controllers 00:30:58.189 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:58.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:58.189 Initialization complete. Launching workers. 00:30:58.189 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 3352 00:30:58.189 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 3633, failed to submit 39 00:30:58.189 success 3509, unsuccessful 124, failed 0 00:30:58.189 19:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:30:58.189 19:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:30:58.189 19:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:58.189 19:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:30:58.189 19:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:58.189 19:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:30:58.189 19:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:58.189 19:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:58.189 rmmod nvme_tcp 00:30:58.189 rmmod nvme_fabrics 00:30:58.189 rmmod nvme_keyring 00:30:58.189 19:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:58.189 19:08:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:30:58.189 19:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:30:58.189 19:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3860345 ']' 00:30:58.189 19:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3860345 00:30:58.189 19:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3860345 ']' 00:30:58.189 19:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3860345 00:30:58.189 19:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:30:58.189 19:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:58.189 19:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3860345 00:30:58.189 19:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:58.189 19:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:58.189 19:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3860345' 00:30:58.189 killing process with pid 3860345 00:30:58.189 19:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3860345 00:30:58.189 19:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3860345 00:30:58.448 19:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:58.449 19:08:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:58.449 19:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:58.449 19:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:30:58.449 19:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:30:58.449 19:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:58.449 19:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:30:58.449 19:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:58.449 19:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:58.449 19:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:58.449 19:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:58.449 19:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:00.354 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:00.354 00:31:00.354 real 0m31.958s 00:31:00.354 user 0m41.449s 00:31:00.354 sys 0m12.621s 00:31:00.354 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:00.354 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:00.354 ************************************ 00:31:00.354 END TEST nvmf_zcopy 00:31:00.354 ************************************ 00:31:00.613 19:08:22 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:00.613 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:00.613 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:00.613 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:00.613 ************************************ 00:31:00.613 START TEST nvmf_nmic 00:31:00.613 ************************************ 00:31:00.613 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:00.613 * Looking for test storage... 00:31:00.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:00.613 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:00.613 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:31:00.613 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:00.613 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:00.613 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:00.613 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:00.613 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:00.613 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@336 -- # IFS=.-: 00:31:00.613 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:31:00.613 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:31:00.613 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:31:00.613 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:00.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:00.614 --rc genhtml_branch_coverage=1 00:31:00.614 --rc 
genhtml_function_coverage=1 00:31:00.614 --rc genhtml_legend=1 00:31:00.614 --rc geninfo_all_blocks=1 00:31:00.614 --rc geninfo_unexecuted_blocks=1 00:31:00.614 00:31:00.614 ' 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:00.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:00.614 --rc genhtml_branch_coverage=1 00:31:00.614 --rc genhtml_function_coverage=1 00:31:00.614 --rc genhtml_legend=1 00:31:00.614 --rc geninfo_all_blocks=1 00:31:00.614 --rc geninfo_unexecuted_blocks=1 00:31:00.614 00:31:00.614 ' 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:00.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:00.614 --rc genhtml_branch_coverage=1 00:31:00.614 --rc genhtml_function_coverage=1 00:31:00.614 --rc genhtml_legend=1 00:31:00.614 --rc geninfo_all_blocks=1 00:31:00.614 --rc geninfo_unexecuted_blocks=1 00:31:00.614 00:31:00.614 ' 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:00.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:00.614 --rc genhtml_branch_coverage=1 00:31:00.614 --rc genhtml_function_coverage=1 00:31:00.614 --rc genhtml_legend=1 00:31:00.614 --rc geninfo_all_blocks=1 00:31:00.614 --rc geninfo_unexecuted_blocks=1 00:31:00.614 00:31:00.614 ' 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:00.614 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:31:00.873 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:00.873 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:00.873 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:00.873 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.873 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.873 19:08:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.873 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:31:00.874 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.874 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:31:00.874 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:00.874 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:00.874 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:00.874 19:08:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:00.874 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:00.874 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:00.874 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:00.874 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:00.874 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:00.874 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:00.874 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:00.874 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:00.874 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:31:00.874 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:00.874 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:00.874 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:00.874 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:00.874 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:00.874 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:00.874 19:08:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:00.874 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:00.874 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:00.874 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:00.874 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:31:00.874 19:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:07.443 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:07.443 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:31:07.443 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:07.443 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:07.443 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:07.443 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:07.443 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:07.443 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:31:07.443 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:07.443 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:31:07.443 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@320 -- # local -ga e810 00:31:07.443 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:31:07.443 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:31:07.443 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:31:07.443 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:31:07.443 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:07.443 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:07.444 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:07.444 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:07.444 Found net devices under 0000:86:00.0: cvl_0_0 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:07.444 Found net devices under 0000:86:00.1: cvl_0_1 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:07.444 19:08:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:07.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:07.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.449 ms 00:31:07.444 00:31:07.444 --- 10.0.0.2 ping statistics --- 00:31:07.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:07.444 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:07.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:07.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:31:07.444 00:31:07.444 --- 10.0.0.1 ping statistics --- 00:31:07.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:07.444 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:07.444 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:07.444 19:08:28 
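
The `nvmftestinit`/`nvmf_tcp_init` steps traced above move one port of the NIC pair into a dedicated network namespace so that initiator and target can exchange real TCP traffic on a single host. Collected into one place, the plumbing from this run looks roughly like the sketch below (interface names `cvl_0_0`/`cvl_0_1` and the 10.0.0.0/24 addresses are taken from this log; requires root, shown for orientation only, not a substitute for `nvmf/common.sh`):

```shell
# Sketch of the namespace setup performed by nvmf_tcp_init in the log above.
ip netns add cvl_0_0_ns_spdk                    # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator IP stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port, tagged with a comment so cleanup can strip it later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# sanity-check reachability in both directions, as the log does
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

The target application is then launched with `ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt`, which is why `NVMF_APP` is prefixed with `NVMF_TARGET_NS_CMD` just before `return 0`.
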
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:07.445 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:07.445 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:07.445 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3867555 00:31:07.445 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:07.445 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3867555 00:31:07.445 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3867555 ']' 00:31:07.445 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:07.445 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:07.445 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:07.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:07.445 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:07.445 19:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:07.445 [2024-11-20 19:08:28.940750] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:31:07.445 [2024-11-20 19:08:28.941639] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:31:07.445 [2024-11-20 19:08:28.941672] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:07.445 [2024-11-20 19:08:29.020226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:07.445 [2024-11-20 19:08:29.062950] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:07.445 [2024-11-20 19:08:29.062988] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:07.445 [2024-11-20 19:08:29.062995] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:07.445 [2024-11-20 19:08:29.063001] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:07.445 [2024-11-20 19:08:29.063006] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:07.445 [2024-11-20 19:08:29.064422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:07.445 [2024-11-20 19:08:29.064529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:07.445 [2024-11-20 19:08:29.064638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:07.445 [2024-11-20 19:08:29.064639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:07.445 [2024-11-20 19:08:29.131997] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:07.445 [2024-11-20 19:08:29.132989] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:07.445 [2024-11-20 19:08:29.133036] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:07.445 [2024-11-20 19:08:29.133333] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:07.445 [2024-11-20 19:08:29.133404] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:07.445 [2024-11-20 19:08:29.197327] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:07.445 Malloc0 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:07.445 [2024-11-20 
19:08:29.281543] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:07.445 test case1: single bdev can't be used in multiple subsystems 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.445 19:08:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:07.445 [2024-11-20 19:08:29.312991] bdev.c:8467:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:31:07.445 [2024-11-20 19:08:29.313026] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:07.445 [2024-11-20 19:08:29.313033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.445 request: 00:31:07.445 { 00:31:07.445 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:07.445 "namespace": { 00:31:07.445 "bdev_name": "Malloc0", 00:31:07.445 "no_auto_visible": false 00:31:07.445 }, 00:31:07.445 "method": "nvmf_subsystem_add_ns", 00:31:07.445 "req_id": 1 00:31:07.445 } 00:31:07.445 Got JSON-RPC error response 00:31:07.445 response: 00:31:07.445 { 00:31:07.445 "code": -32602, 00:31:07.445 "message": "Invalid parameters" 00:31:07.445 } 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:07.445 Adding namespace failed - expected result. 
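
Test case 1 above exercises SPDK's bdev claim model: `cnode1` already holds an `exclusive_write` claim on `Malloc0`, so adding the same bdev as a namespace of `cnode2` is rejected. The request/response pair captured in the log can be illustrated with a minimal JSON-RPC framing sketch (method name and parameters are copied from the log; in practice `scripts/rpc.py nvmf_subsystem_add_ns` builds and sends this over the Unix socket):

```python
import json

# JSON-RPC 2.0 request equivalent to the failing nvmf_subsystem_add_ns call
# captured in the log above (fields taken verbatim from the logged request).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_ns",
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode2",
        "namespace": {"bdev_name": "Malloc0", "no_auto_visible": False},
    },
}

# Error object observed in the log: Malloc0 is already claimed (exclusive_write)
# by cnode1, so the target answers with JSON-RPC "Invalid params" (-32602).
response = {"code": -32602, "message": "Invalid parameters"}

payload = json.dumps(request)
assert json.loads(payload)["method"] == "nvmf_subsystem_add_ns"
assert response["code"] == -32602
```

The test script treats this failure as the expected result (`nmic_status=1`), which is why the run continues rather than aborting.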
00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:07.445 test case2: host connect to nvmf target in multiple paths 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:07.445 [2024-11-20 19:08:29.325083] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:07.445 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:31:07.703 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:31:07.703 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:31:07.703 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:07.704 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:07.704 19:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:31:09.603 19:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:09.603 19:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:09.603 19:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:09.603 19:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:09.603 19:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:09.603 19:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:31:09.603 19:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:09.603 [global] 00:31:09.603 thread=1 00:31:09.603 invalidate=1 00:31:09.603 rw=write 00:31:09.603 time_based=1 00:31:09.603 runtime=1 00:31:09.603 ioengine=libaio 00:31:09.603 direct=1 00:31:09.603 bs=4096 00:31:09.603 iodepth=1 00:31:09.603 norandommap=0 00:31:09.603 numjobs=1 00:31:09.603 00:31:09.603 verify_dump=1 00:31:09.603 verify_backlog=512 00:31:09.603 verify_state_save=0 00:31:09.603 do_verify=1 00:31:09.603 verify=crc32c-intel 00:31:09.603 [job0] 00:31:09.603 filename=/dev/nvme0n1 00:31:09.603 Could not set queue depth (nvme0n1) 00:31:09.862 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:09.862 fio-3.35 00:31:09.862 Starting 1 thread 00:31:11.240 00:31:11.240 job0: (groupid=0, jobs=1): err= 0: pid=3868166: Wed Nov 20 
19:08:33 2024 00:31:11.240 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:31:11.240 slat (nsec): min=6360, max=30999, avg=7247.10, stdev=999.30 00:31:11.240 clat (usec): min=183, max=407, avg=208.18, stdev=13.11 00:31:11.240 lat (usec): min=189, max=438, avg=215.43, stdev=13.33 00:31:11.240 clat percentiles (usec): 00:31:11.240 | 1.00th=[ 188], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 200], 00:31:11.240 | 30.00th=[ 202], 40.00th=[ 204], 50.00th=[ 206], 60.00th=[ 210], 00:31:11.240 | 70.00th=[ 212], 80.00th=[ 217], 90.00th=[ 221], 95.00th=[ 227], 00:31:11.240 | 99.00th=[ 255], 99.50th=[ 260], 99.90th=[ 277], 99.95th=[ 281], 00:31:11.240 | 99.99th=[ 408] 00:31:11.240 write: IOPS=2974, BW=11.6MiB/s (12.2MB/s)(11.6MiB/1001msec); 0 zone resets 00:31:11.240 slat (nsec): min=9052, max=44101, avg=10198.82, stdev=1323.28 00:31:11.240 clat (usec): min=123, max=326, avg=136.67, stdev= 8.48 00:31:11.240 lat (usec): min=136, max=370, avg=146.87, stdev= 8.86 00:31:11.240 clat percentiles (usec): 00:31:11.240 | 1.00th=[ 129], 5.00th=[ 131], 10.00th=[ 133], 20.00th=[ 133], 00:31:11.240 | 30.00th=[ 135], 40.00th=[ 135], 50.00th=[ 135], 60.00th=[ 137], 00:31:11.240 | 70.00th=[ 137], 80.00th=[ 139], 90.00th=[ 143], 95.00th=[ 147], 00:31:11.240 | 99.00th=[ 180], 99.50th=[ 182], 99.90th=[ 245], 99.95th=[ 249], 00:31:11.240 | 99.99th=[ 326] 00:31:11.240 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:31:11.240 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:31:11.240 lat (usec) : 250=98.70%, 500=1.30% 00:31:11.240 cpu : usr=3.00%, sys=4.60%, ctx=5537, majf=0, minf=1 00:31:11.240 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:11.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:11.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:11.240 issued rwts: total=2560,2977,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:11.240 
latency : target=0, window=0, percentile=100.00%, depth=1 00:31:11.240 00:31:11.240 Run status group 0 (all jobs): 00:31:11.240 READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:31:11.240 WRITE: bw=11.6MiB/s (12.2MB/s), 11.6MiB/s-11.6MiB/s (12.2MB/s-12.2MB/s), io=11.6MiB (12.2MB), run=1001-1001msec 00:31:11.240 00:31:11.240 Disk stats (read/write): 00:31:11.240 nvme0n1: ios=2419/2560, merge=0/0, ticks=720/327, in_queue=1047, util=95.49% 00:31:11.240 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:11.240 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:11.240 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:11.240 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:31:11.240 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:11.240 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:11.240 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:11.240 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:11.240 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:31:11.240 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:11.240 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:31:11.240 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
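
The fio summary above is internally consistent: 2560 read completions and 2977 write completions of 4 KiB each over the 1.001 s runtime reproduce the reported ~9.99 MiB/s read and ~11.6 MiB/s write bandwidth. A quick arithmetic check on the figures from the log:

```python
# Verify the fio summary numbers from the run above:
# block size 4 KiB, runtime 1001 ms, issued rwts: total=2560,2977.
BS = 4096            # bytes per I/O (bs=4096 in the job file)
RUNTIME_S = 1.001    # run=1001msec

read_mib_s = 2560 * BS / RUNTIME_S / (1024 ** 2)
write_mib_s = 2977 * BS / RUNTIME_S / (1024 ** 2)

print(round(read_mib_s, 2), round(write_mib_s, 2))  # ~9.99 and ~11.6 MiB/s
```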
nvmf/common.sh@516 -- # nvmfcleanup 00:31:11.240 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:31:11.240 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:11.240 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:31:11.240 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:11.240 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:11.240 rmmod nvme_tcp 00:31:11.240 rmmod nvme_fabrics 00:31:11.240 rmmod nvme_keyring 00:31:11.240 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:11.240 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:31:11.240 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:31:11.240 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3867555 ']' 00:31:11.240 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3867555 00:31:11.240 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3867555 ']' 00:31:11.240 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3867555 00:31:11.240 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:31:11.240 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:11.240 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3867555 00:31:11.499 19:08:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:11.499 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:11.499 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3867555' 00:31:11.499 killing process with pid 3867555 00:31:11.499 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3867555 00:31:11.499 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3867555 00:31:11.499 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:11.499 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:11.499 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:11.499 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:31:11.499 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:31:11.499 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:11.499 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:31:11.499 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:11.499 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:11.499 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:11.499 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:11.499 19:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.036 19:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:14.036 00:31:14.036 real 0m13.093s 00:31:14.036 user 0m24.060s 00:31:14.036 sys 0m6.191s 00:31:14.036 19:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:14.036 19:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:14.036 ************************************ 00:31:14.036 END TEST nvmf_nmic 00:31:14.036 ************************************ 00:31:14.037 19:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:14.037 19:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:14.037 19:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:14.037 19:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:14.037 ************************************ 00:31:14.037 START TEST nvmf_fio_target 00:31:14.037 ************************************ 00:31:14.037 19:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:14.037 * Looking for test storage... 
00:31:14.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:14.037 19:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:14.037 19:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:14.037 19:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:14.037 
19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:14.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.037 --rc genhtml_branch_coverage=1 00:31:14.037 --rc genhtml_function_coverage=1 00:31:14.037 --rc genhtml_legend=1 00:31:14.037 --rc geninfo_all_blocks=1 00:31:14.037 --rc geninfo_unexecuted_blocks=1 00:31:14.037 00:31:14.037 ' 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:14.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.037 --rc genhtml_branch_coverage=1 00:31:14.037 --rc genhtml_function_coverage=1 00:31:14.037 --rc genhtml_legend=1 00:31:14.037 --rc geninfo_all_blocks=1 00:31:14.037 --rc geninfo_unexecuted_blocks=1 00:31:14.037 00:31:14.037 ' 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:14.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.037 --rc genhtml_branch_coverage=1 00:31:14.037 --rc genhtml_function_coverage=1 00:31:14.037 --rc genhtml_legend=1 00:31:14.037 --rc geninfo_all_blocks=1 00:31:14.037 --rc geninfo_unexecuted_blocks=1 00:31:14.037 00:31:14.037 ' 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:14.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.037 --rc genhtml_branch_coverage=1 00:31:14.037 --rc genhtml_function_coverage=1 00:31:14.037 --rc genhtml_legend=1 00:31:14.037 --rc geninfo_all_blocks=1 
00:31:14.037 --rc geninfo_unexecuted_blocks=1 00:31:14.037 00:31:14.037 ' 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:14.037 
19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.037 19:08:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.037 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:31:14.038 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.038 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:31:14.038 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:14.038 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:14.038 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:14.038 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:14.038 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:14.038 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:14.038 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:14.038 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:14.038 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:14.038 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:14.038 
19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:14.038 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:14.038 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:14.038 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:31:14.038 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:14.038 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:14.038 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:14.038 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:14.038 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:14.038 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:14.038 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:14.038 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.038 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:14.038 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:14.038 19:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:14.038 19:08:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:31:20.611 19:08:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:20.611 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:20.611 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:20.611 
19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:20.611 Found net 
devices under 0000:86:00.0: cvl_0_0 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:20.611 Found net devices under 0000:86:00.1: cvl_0_1 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:20.611 19:08:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:20.611 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:20.612 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:20.612 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:20.612 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:20.612 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:20.612 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:31:20.612 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:20.612 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:20.612 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:20.612 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:20.612 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:20.612 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:20.612 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:20.612 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:20.612 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:20.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:20.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:31:20.612 00:31:20.612 --- 10.0.0.2 ping statistics --- 00:31:20.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.612 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:31:20.612 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:20.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:20.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:31:20.612 00:31:20.612 --- 10.0.0.1 ping statistics --- 00:31:20.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.612 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:31:20.612 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:20.612 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:31:20.612 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:20.612 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:20.612 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:20.612 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:20.612 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:20.612 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:20.612 19:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:20.612 19:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:31:20.612 19:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:20.612 19:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:20.612 19:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:20.612 19:08:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3871934 00:31:20.612 19:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:20.612 19:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3871934 00:31:20.612 19:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3871934 ']' 00:31:20.612 19:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:20.612 19:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:20.612 19:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:20.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:20.612 19:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:20.612 19:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:20.612 [2024-11-20 19:08:42.083443] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:20.612 [2024-11-20 19:08:42.084474] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:31:20.612 [2024-11-20 19:08:42.084518] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:20.612 [2024-11-20 19:08:42.166643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:20.612 [2024-11-20 19:08:42.210428] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:20.612 [2024-11-20 19:08:42.210467] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:20.612 [2024-11-20 19:08:42.210474] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:20.612 [2024-11-20 19:08:42.210479] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:20.612 [2024-11-20 19:08:42.210485] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:20.612 [2024-11-20 19:08:42.214223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:20.612 [2024-11-20 19:08:42.214257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:20.612 [2024-11-20 19:08:42.214365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:20.612 [2024-11-20 19:08:42.214366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:20.612 [2024-11-20 19:08:42.283286] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:20.612 [2024-11-20 19:08:42.283930] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:20.612 [2024-11-20 19:08:42.284063] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:31:20.612 [2024-11-20 19:08:42.284444] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:20.612 [2024-11-20 19:08:42.284587] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:20.871 19:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:20.871 19:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:31:20.871 19:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:20.871 19:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:20.871 19:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:20.872 19:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:20.872 19:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:20.872 [2024-11-20 19:08:43.131067] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:20.872 19:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:21.131 19:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:31:21.131 19:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:31:21.391 19:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:31:21.391 19:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:21.651 19:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:31:21.651 19:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:21.910 19:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:31:21.910 19:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:31:21.910 19:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:22.169 19:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:31:22.169 19:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:22.427 19:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:31:22.427 19:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:22.687 19:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:31:22.687 19:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:31:22.687 19:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:22.946 19:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:22.946 19:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:23.204 19:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:23.204 19:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:23.464 19:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:23.464 [2024-11-20 19:08:45.731046] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:23.464 19:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:31:23.723 19:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:31:23.982 19:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:24.241 19:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:31:24.241 19:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:31:24.241 19:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:24.241 19:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:31:24.241 19:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:31:24.241 19:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:31:26.144 19:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:26.144 19:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:26.144 19:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:26.144 19:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:31:26.403 19:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:26.403 19:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:31:26.403 19:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:26.403 [global] 00:31:26.403 thread=1 00:31:26.403 invalidate=1 00:31:26.403 rw=write 00:31:26.403 time_based=1 00:31:26.403 runtime=1 00:31:26.403 ioengine=libaio 00:31:26.403 direct=1 00:31:26.403 bs=4096 00:31:26.403 iodepth=1 00:31:26.403 norandommap=0 00:31:26.403 numjobs=1 00:31:26.403 00:31:26.403 verify_dump=1 00:31:26.403 verify_backlog=512 00:31:26.403 verify_state_save=0 00:31:26.403 do_verify=1 00:31:26.403 verify=crc32c-intel 00:31:26.403 [job0] 00:31:26.403 filename=/dev/nvme0n1 00:31:26.403 [job1] 00:31:26.403 filename=/dev/nvme0n2 00:31:26.403 [job2] 00:31:26.403 filename=/dev/nvme0n3 00:31:26.403 [job3] 00:31:26.403 filename=/dev/nvme0n4 00:31:26.403 Could not set queue depth (nvme0n1) 00:31:26.403 Could not set queue depth (nvme0n2) 00:31:26.403 Could not set queue depth (nvme0n3) 00:31:26.403 Could not set queue depth (nvme0n4) 00:31:26.662 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:26.662 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:26.662 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:26.662 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:26.662 fio-3.35 00:31:26.662 Starting 4 threads 00:31:28.049 00:31:28.049 job0: (groupid=0, jobs=1): err= 0: pid=3873269: Wed Nov 20 19:08:50 2024 00:31:28.049 read: IOPS=2509, BW=9.80MiB/s (10.3MB/s)(9.81MiB/1001msec) 00:31:28.049 slat (nsec): min=7047, max=41010, avg=8138.66, stdev=1554.86 00:31:28.049 clat (usec): min=153, max=302, avg=205.80, stdev=16.91 00:31:28.049 lat (usec): min=177, max=310, 
avg=213.94, stdev=16.96 00:31:28.049 clat percentiles (usec): 00:31:28.049 | 1.00th=[ 176], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 196], 00:31:28.049 | 30.00th=[ 198], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 206], 00:31:28.049 | 70.00th=[ 208], 80.00th=[ 210], 90.00th=[ 219], 95.00th=[ 245], 00:31:28.049 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 297], 99.95th=[ 302], 00:31:28.049 | 99.99th=[ 302] 00:31:28.049 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:31:28.049 slat (nsec): min=10604, max=47485, avg=11918.16, stdev=1851.52 00:31:28.049 clat (usec): min=126, max=1237, avg=162.93, stdev=29.52 00:31:28.049 lat (usec): min=138, max=1248, avg=174.84, stdev=29.66 00:31:28.049 clat percentiles (usec): 00:31:28.049 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 143], 20.00th=[ 147], 00:31:28.049 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 161], 00:31:28.049 | 70.00th=[ 172], 80.00th=[ 184], 90.00th=[ 192], 95.00th=[ 196], 00:31:28.049 | 99.00th=[ 215], 99.50th=[ 253], 99.90th=[ 265], 99.95th=[ 273], 00:31:28.049 | 99.99th=[ 1237] 00:31:28.049 bw ( KiB/s): min=12288, max=12288, per=55.32%, avg=12288.00, stdev= 0.00, samples=1 00:31:28.049 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:31:28.049 lat (usec) : 250=97.89%, 500=2.09% 00:31:28.049 lat (msec) : 2=0.02% 00:31:28.049 cpu : usr=4.00%, sys=8.30%, ctx=5073, majf=0, minf=1 00:31:28.049 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:28.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.049 issued rwts: total=2512,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.049 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:28.049 job1: (groupid=0, jobs=1): err= 0: pid=3873270: Wed Nov 20 19:08:50 2024 00:31:28.049 read: IOPS=83, BW=334KiB/s (342kB/s)(340KiB/1019msec) 00:31:28.049 slat (nsec): 
min=8460, max=34420, avg=11435.06, stdev=4716.67 00:31:28.049 clat (usec): min=196, max=42001, avg=10793.30, stdev=17971.18 00:31:28.049 lat (usec): min=206, max=42012, avg=10804.74, stdev=17973.90 00:31:28.049 clat percentiles (usec): 00:31:28.049 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 204], 20.00th=[ 208], 00:31:28.049 | 30.00th=[ 223], 40.00th=[ 231], 50.00th=[ 241], 60.00th=[ 269], 00:31:28.049 | 70.00th=[ 306], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:28.049 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:28.049 | 99.99th=[42206] 00:31:28.049 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:31:28.049 slat (nsec): min=11130, max=37216, avg=12500.01, stdev=1597.84 00:31:28.049 clat (usec): min=146, max=303, avg=180.04, stdev=12.21 00:31:28.049 lat (usec): min=158, max=340, avg=192.54, stdev=12.64 00:31:28.049 clat percentiles (usec): 00:31:28.049 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:31:28.049 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 184], 00:31:28.049 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 196], 95.00th=[ 200], 00:31:28.049 | 99.00th=[ 208], 99.50th=[ 210], 99.90th=[ 306], 99.95th=[ 306], 00:31:28.049 | 99.99th=[ 306] 00:31:28.049 bw ( KiB/s): min= 4096, max= 4096, per=18.44%, avg=4096.00, stdev= 0.00, samples=1 00:31:28.049 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:28.049 lat (usec) : 250=93.63%, 500=2.68% 00:31:28.049 lat (msec) : 50=3.69% 00:31:28.049 cpu : usr=0.49%, sys=1.08%, ctx=597, majf=0, minf=1 00:31:28.049 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:28.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.049 issued rwts: total=85,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.049 latency : target=0, window=0, percentile=100.00%, depth=1 
00:31:28.049 job2: (groupid=0, jobs=1): err= 0: pid=3873271: Wed Nov 20 19:08:50 2024 00:31:28.049 read: IOPS=37, BW=150KiB/s (154kB/s)(152KiB/1012msec) 00:31:28.049 slat (nsec): min=7671, max=25989, avg=16873.84, stdev=7409.74 00:31:28.049 clat (usec): min=222, max=41030, avg=23843.18, stdev=20343.21 00:31:28.049 lat (usec): min=230, max=41052, avg=23860.06, stdev=20349.79 00:31:28.049 clat percentiles (usec): 00:31:28.049 | 1.00th=[ 223], 5.00th=[ 281], 10.00th=[ 285], 20.00th=[ 293], 00:31:28.049 | 30.00th=[ 306], 40.00th=[ 367], 50.00th=[41157], 60.00th=[41157], 00:31:28.049 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:28.049 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:28.049 | 99.99th=[41157] 00:31:28.049 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:31:28.049 slat (nsec): min=10746, max=47646, avg=13270.88, stdev=3395.86 00:31:28.049 clat (usec): min=162, max=333, avg=189.01, stdev=14.08 00:31:28.049 lat (usec): min=177, max=380, avg=202.28, stdev=15.15 00:31:28.049 clat percentiles (usec): 00:31:28.049 | 1.00th=[ 169], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 180], 00:31:28.049 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 192], 00:31:28.049 | 70.00th=[ 194], 80.00th=[ 198], 90.00th=[ 202], 95.00th=[ 208], 00:31:28.049 | 99.00th=[ 233], 99.50th=[ 255], 99.90th=[ 334], 99.95th=[ 334], 00:31:28.049 | 99.99th=[ 334] 00:31:28.049 bw ( KiB/s): min= 4096, max= 4096, per=18.44%, avg=4096.00, stdev= 0.00, samples=1 00:31:28.049 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:28.049 lat (usec) : 250=92.73%, 500=3.27% 00:31:28.049 lat (msec) : 50=4.00% 00:31:28.049 cpu : usr=0.40%, sys=1.09%, ctx=550, majf=0, minf=1 00:31:28.049 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:28.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.049 issued rwts: total=38,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.049 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:28.049 job3: (groupid=0, jobs=1): err= 0: pid=3873272: Wed Nov 20 19:08:50 2024 00:31:28.049 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:31:28.049 slat (nsec): min=7485, max=38954, avg=8643.14, stdev=1437.30 00:31:28.049 clat (usec): min=220, max=41338, avg=271.44, stdev=908.13 00:31:28.049 lat (usec): min=228, max=41346, avg=280.09, stdev=908.12 00:31:28.049 clat percentiles (usec): 00:31:28.049 | 1.00th=[ 229], 5.00th=[ 233], 10.00th=[ 235], 20.00th=[ 239], 00:31:28.049 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 249], 00:31:28.049 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 302], 00:31:28.049 | 99.00th=[ 318], 99.50th=[ 326], 99.90th=[ 416], 99.95th=[ 429], 00:31:28.049 | 99.99th=[41157] 00:31:28.049 write: IOPS=2072, BW=8292KiB/s (8491kB/s)(8300KiB/1001msec); 0 zone resets 00:31:28.049 slat (nsec): min=10612, max=47445, avg=12051.12, stdev=2150.56 00:31:28.049 clat (usec): min=137, max=390, avg=187.54, stdev=28.65 00:31:28.049 lat (usec): min=164, max=406, avg=199.59, stdev=28.80 00:31:28.049 clat percentiles (usec): 00:31:28.049 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 169], 00:31:28.049 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:31:28.049 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 221], 95.00th=[ 269], 00:31:28.049 | 99.00th=[ 285], 99.50th=[ 289], 99.90th=[ 297], 99.95th=[ 343], 00:31:28.049 | 99.99th=[ 392] 00:31:28.049 bw ( KiB/s): min= 9032, max= 9032, per=40.66%, avg=9032.00, stdev= 0.00, samples=1 00:31:28.049 iops : min= 2258, max= 2258, avg=2258.00, stdev= 0.00, samples=1 00:31:28.049 lat (usec) : 250=78.22%, 500=21.76% 00:31:28.049 lat (msec) : 50=0.02% 00:31:28.049 cpu : usr=3.70%, sys=6.50%, ctx=4123, majf=0, minf=1 00:31:28.049 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:28.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.049 issued rwts: total=2048,2075,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.049 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:28.049 00:31:28.049 Run status group 0 (all jobs): 00:31:28.049 READ: bw=18.0MiB/s (18.8MB/s), 150KiB/s-9.80MiB/s (154kB/s-10.3MB/s), io=18.3MiB (19.2MB), run=1001-1019msec 00:31:28.049 WRITE: bw=21.7MiB/s (22.7MB/s), 2010KiB/s-9.99MiB/s (2058kB/s-10.5MB/s), io=22.1MiB (23.2MB), run=1001-1019msec 00:31:28.049 00:31:28.049 Disk stats (read/write): 00:31:28.049 nvme0n1: ios=2100/2273, merge=0/0, ticks=1243/347, in_queue=1590, util=98.00% 00:31:28.049 nvme0n2: ios=93/512, merge=0/0, ticks=800/83, in_queue=883, util=91.06% 00:31:28.049 nvme0n3: ios=34/512, merge=0/0, ticks=743/90, in_queue=833, util=89.05% 00:31:28.049 nvme0n4: ios=1536/1999, merge=0/0, ticks=412/359, in_queue=771, util=89.71% 00:31:28.049 19:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:31:28.049 [global] 00:31:28.049 thread=1 00:31:28.049 invalidate=1 00:31:28.049 rw=randwrite 00:31:28.049 time_based=1 00:31:28.049 runtime=1 00:31:28.049 ioengine=libaio 00:31:28.049 direct=1 00:31:28.049 bs=4096 00:31:28.049 iodepth=1 00:31:28.049 norandommap=0 00:31:28.049 numjobs=1 00:31:28.049 00:31:28.049 verify_dump=1 00:31:28.050 verify_backlog=512 00:31:28.050 verify_state_save=0 00:31:28.050 do_verify=1 00:31:28.050 verify=crc32c-intel 00:31:28.050 [job0] 00:31:28.050 filename=/dev/nvme0n1 00:31:28.050 [job1] 00:31:28.050 filename=/dev/nvme0n2 00:31:28.050 [job2] 00:31:28.050 filename=/dev/nvme0n3 00:31:28.050 [job3] 00:31:28.050 filename=/dev/nvme0n4 00:31:28.050 Could not set queue depth (nvme0n1) 
00:31:28.050 Could not set queue depth (nvme0n2) 00:31:28.050 Could not set queue depth (nvme0n3) 00:31:28.050 Could not set queue depth (nvme0n4) 00:31:28.307 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:28.307 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:28.307 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:28.307 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:28.307 fio-3.35 00:31:28.307 Starting 4 threads 00:31:29.675 00:31:29.675 job0: (groupid=0, jobs=1): err= 0: pid=3873646: Wed Nov 20 19:08:51 2024 00:31:29.675 read: IOPS=24, BW=96.5KiB/s (98.8kB/s)(100KiB/1036msec) 00:31:29.675 slat (nsec): min=8025, max=24102, avg=20637.12, stdev=4365.09 00:31:29.675 clat (usec): min=232, max=41092, avg=37702.99, stdev=11274.75 00:31:29.675 lat (usec): min=241, max=41114, avg=37723.63, stdev=11278.03 00:31:29.675 clat percentiles (usec): 00:31:29.675 | 1.00th=[ 233], 5.00th=[ 251], 10.00th=[40633], 20.00th=[40633], 00:31:29.675 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:29.675 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:29.675 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:29.675 | 99.99th=[41157] 00:31:29.675 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:31:29.675 slat (nsec): min=8718, max=55987, avg=11411.22, stdev=2457.07 00:31:29.675 clat (usec): min=144, max=265, avg=166.95, stdev= 9.72 00:31:29.675 lat (usec): min=155, max=321, avg=178.36, stdev=10.97 00:31:29.675 clat percentiles (usec): 00:31:29.675 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:31:29.675 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 165], 60.00th=[ 167], 00:31:29.675 | 70.00th=[ 172], 
80.00th=[ 174], 90.00th=[ 178], 95.00th=[ 184], 00:31:29.675 | 99.00th=[ 196], 99.50th=[ 206], 99.90th=[ 265], 99.95th=[ 265], 00:31:29.675 | 99.99th=[ 265] 00:31:29.675 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:31:29.675 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:29.675 lat (usec) : 250=95.34%, 500=0.37% 00:31:29.675 lat (msec) : 50=4.28% 00:31:29.675 cpu : usr=0.48%, sys=0.29%, ctx=538, majf=0, minf=1 00:31:29.675 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:29.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.675 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.675 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:29.675 job1: (groupid=0, jobs=1): err= 0: pid=3873647: Wed Nov 20 19:08:51 2024 00:31:29.675 read: IOPS=24, BW=96.1KiB/s (98.4kB/s)(100KiB/1041msec) 00:31:29.675 slat (nsec): min=9032, max=25677, avg=21157.72, stdev=4377.55 00:31:29.675 clat (usec): min=498, max=41439, avg=37746.61, stdev=11210.76 00:31:29.675 lat (usec): min=522, max=41450, avg=37767.77, stdev=11209.56 00:31:29.675 clat percentiles (usec): 00:31:29.675 | 1.00th=[ 498], 5.00th=[ 506], 10.00th=[40633], 20.00th=[40633], 00:31:29.675 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:29.675 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:29.676 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:31:29.676 | 99.99th=[41681] 00:31:29.676 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:31:29.676 slat (nsec): min=8984, max=40588, avg=10000.24, stdev=1736.15 00:31:29.676 clat (usec): min=153, max=349, avg=176.72, stdev=16.30 00:31:29.676 lat (usec): min=163, max=389, avg=186.72, stdev=16.94 00:31:29.676 clat percentiles (usec): 
00:31:29.676 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 165], 00:31:29.676 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 176], 00:31:29.676 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 196], 95.00th=[ 210], 00:31:29.676 | 99.00th=[ 231], 99.50th=[ 235], 99.90th=[ 351], 99.95th=[ 351], 00:31:29.676 | 99.99th=[ 351] 00:31:29.676 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:31:29.676 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:29.676 lat (usec) : 250=95.16%, 500=0.37%, 750=0.19% 00:31:29.676 lat (msec) : 50=4.28% 00:31:29.676 cpu : usr=0.10%, sys=0.67%, ctx=537, majf=0, minf=1 00:31:29.676 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:29.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.676 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.676 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.676 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:29.676 job2: (groupid=0, jobs=1): err= 0: pid=3873648: Wed Nov 20 19:08:51 2024 00:31:29.676 read: IOPS=35, BW=143KiB/s (146kB/s)(144KiB/1010msec) 00:31:29.676 slat (nsec): min=10911, max=26855, avg=18544.50, stdev=5698.17 00:31:29.676 clat (usec): min=252, max=42290, avg=25060.57, stdev=20068.29 00:31:29.676 lat (usec): min=274, max=42314, avg=25079.11, stdev=20064.73 00:31:29.676 clat percentiles (usec): 00:31:29.676 | 1.00th=[ 253], 5.00th=[ 253], 10.00th=[ 255], 20.00th=[ 258], 00:31:29.676 | 30.00th=[ 265], 40.00th=[40633], 50.00th=[40633], 60.00th=[40633], 00:31:29.676 | 70.00th=[40633], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:31:29.676 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:29.676 | 99.99th=[42206] 00:31:29.676 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:31:29.676 slat (nsec): min=8208, max=33459, avg=11538.31, 
stdev=2015.55 00:31:29.676 clat (usec): min=146, max=371, avg=193.47, stdev=18.57 00:31:29.676 lat (usec): min=157, max=404, avg=205.01, stdev=19.04 00:31:29.676 clat percentiles (usec): 00:31:29.676 | 1.00th=[ 151], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 180], 00:31:29.676 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 198], 00:31:29.676 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 215], 95.00th=[ 223], 00:31:29.676 | 99.00th=[ 233], 99.50th=[ 243], 99.90th=[ 371], 99.95th=[ 371], 00:31:29.676 | 99.99th=[ 371] 00:31:29.676 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:31:29.676 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:29.676 lat (usec) : 250=93.25%, 500=2.74% 00:31:29.676 lat (msec) : 50=4.01% 00:31:29.676 cpu : usr=0.59%, sys=0.79%, ctx=550, majf=0, minf=1 00:31:29.676 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:29.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.676 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.676 issued rwts: total=36,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.676 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:29.676 job3: (groupid=0, jobs=1): err= 0: pid=3873649: Wed Nov 20 19:08:51 2024 00:31:29.676 read: IOPS=21, BW=87.5KiB/s (89.6kB/s)(88.0KiB/1006msec) 00:31:29.676 slat (nsec): min=10225, max=22992, avg=21584.45, stdev=2556.00 00:31:29.676 clat (usec): min=40769, max=41047, avg=40955.79, stdev=59.75 00:31:29.676 lat (usec): min=40779, max=41069, avg=40977.37, stdev=61.50 00:31:29.676 clat percentiles (usec): 00:31:29.676 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:29.676 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:29.676 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:29.676 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 
99.95th=[41157], 00:31:29.676 | 99.99th=[41157] 00:31:29.676 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:31:29.676 slat (nsec): min=10369, max=35346, avg=11822.43, stdev=1880.63 00:31:29.676 clat (usec): min=150, max=296, avg=188.55, stdev=18.53 00:31:29.676 lat (usec): min=161, max=331, avg=200.37, stdev=18.94 00:31:29.676 clat percentiles (usec): 00:31:29.676 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 172], 00:31:29.676 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 194], 00:31:29.676 | 70.00th=[ 198], 80.00th=[ 204], 90.00th=[ 212], 95.00th=[ 219], 00:31:29.676 | 99.00th=[ 233], 99.50th=[ 258], 99.90th=[ 297], 99.95th=[ 297], 00:31:29.676 | 99.99th=[ 297] 00:31:29.676 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:31:29.676 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:29.676 lat (usec) : 250=95.13%, 500=0.75% 00:31:29.676 lat (msec) : 50=4.12% 00:31:29.676 cpu : usr=0.40%, sys=1.00%, ctx=534, majf=0, minf=1 00:31:29.676 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:29.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.676 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.676 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.676 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:29.676 00:31:29.676 Run status group 0 (all jobs): 00:31:29.676 READ: bw=415KiB/s (425kB/s), 87.5KiB/s-143KiB/s (89.6kB/s-146kB/s), io=432KiB (442kB), run=1006-1041msec 00:31:29.676 WRITE: bw=7869KiB/s (8058kB/s), 1967KiB/s-2036KiB/s (2015kB/s-2085kB/s), io=8192KiB (8389kB), run=1006-1041msec 00:31:29.676 00:31:29.676 Disk stats (read/write): 00:31:29.676 nvme0n1: ios=70/512, merge=0/0, ticks=766/82, in_queue=848, util=86.27% 00:31:29.676 nvme0n2: ios=32/512, merge=0/0, ticks=1081/89, in_queue=1170, util=90.48% 00:31:29.676 nvme0n3: 
ios=65/512, merge=0/0, ticks=1168/98, in_queue=1266, util=99.06% 00:31:29.676 nvme0n4: ios=18/512, merge=0/0, ticks=738/95, in_queue=833, util=89.65% 00:31:29.676 19:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:31:29.676 [global] 00:31:29.676 thread=1 00:31:29.676 invalidate=1 00:31:29.676 rw=write 00:31:29.676 time_based=1 00:31:29.676 runtime=1 00:31:29.676 ioengine=libaio 00:31:29.676 direct=1 00:31:29.676 bs=4096 00:31:29.676 iodepth=128 00:31:29.676 norandommap=0 00:31:29.676 numjobs=1 00:31:29.676 00:31:29.676 verify_dump=1 00:31:29.676 verify_backlog=512 00:31:29.676 verify_state_save=0 00:31:29.676 do_verify=1 00:31:29.676 verify=crc32c-intel 00:31:29.676 [job0] 00:31:29.676 filename=/dev/nvme0n1 00:31:29.676 [job1] 00:31:29.676 filename=/dev/nvme0n2 00:31:29.676 [job2] 00:31:29.676 filename=/dev/nvme0n3 00:31:29.676 [job3] 00:31:29.676 filename=/dev/nvme0n4 00:31:29.676 Could not set queue depth (nvme0n1) 00:31:29.676 Could not set queue depth (nvme0n2) 00:31:29.676 Could not set queue depth (nvme0n3) 00:31:29.676 Could not set queue depth (nvme0n4) 00:31:29.676 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:29.676 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:29.676 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:29.676 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:29.676 fio-3.35 00:31:29.676 Starting 4 threads 00:31:31.046 00:31:31.046 job0: (groupid=0, jobs=1): err= 0: pid=3874017: Wed Nov 20 19:08:53 2024 00:31:31.046 read: IOPS=6107, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1006msec) 00:31:31.046 slat (nsec): min=1337, max=9902.6k, avg=83250.77, 
stdev=650546.70 00:31:31.046 clat (usec): min=3182, max=20008, avg=10756.05, stdev=2789.88 00:31:31.046 lat (usec): min=3186, max=20019, avg=10839.30, stdev=2833.26 00:31:31.046 clat percentiles (usec): 00:31:31.046 | 1.00th=[ 5735], 5.00th=[ 6915], 10.00th=[ 7373], 20.00th=[ 8848], 00:31:31.046 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10552], 00:31:31.046 | 70.00th=[11600], 80.00th=[13173], 90.00th=[14746], 95.00th=[16909], 00:31:31.046 | 99.00th=[18220], 99.50th=[18482], 99.90th=[19530], 99.95th=[20055], 00:31:31.046 | 99.99th=[20055] 00:31:31.046 write: IOPS=6378, BW=24.9MiB/s (26.1MB/s)(25.1MiB/1006msec); 0 zone resets 00:31:31.046 slat (usec): min=2, max=8642, avg=71.34, stdev=479.88 00:31:31.046 clat (usec): min=1400, max=19593, avg=9588.16, stdev=2366.26 00:31:31.046 lat (usec): min=1412, max=19597, avg=9659.50, stdev=2384.37 00:31:31.046 clat percentiles (usec): 00:31:31.046 | 1.00th=[ 2507], 5.00th=[ 5997], 10.00th=[ 6456], 20.00th=[ 7570], 00:31:31.046 | 30.00th=[ 9110], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10290], 00:31:31.046 | 70.00th=[10421], 80.00th=[10683], 90.00th=[12780], 95.00th=[13960], 00:31:31.046 | 99.00th=[15664], 99.50th=[15926], 99.90th=[18744], 99.95th=[18744], 00:31:31.046 | 99.99th=[19530] 00:31:31.046 bw ( KiB/s): min=24480, max=25780, per=34.81%, avg=25130.00, stdev=919.24, samples=2 00:31:31.046 iops : min= 6120, max= 6445, avg=6282.50, stdev=229.81, samples=2 00:31:31.046 lat (msec) : 2=0.40%, 4=0.60%, 10=48.50%, 20=50.49%, 50=0.02% 00:31:31.046 cpu : usr=4.38%, sys=6.77%, ctx=551, majf=0, minf=1 00:31:31.046 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:31:31.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:31.046 issued rwts: total=6144,6417,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:31.046 latency : target=0, window=0, percentile=100.00%, depth=128 
00:31:31.046 job1: (groupid=0, jobs=1): err= 0: pid=3874018: Wed Nov 20 19:08:53 2024 00:31:31.046 read: IOPS=2074, BW=8298KiB/s (8497kB/s)(8364KiB/1008msec) 00:31:31.046 slat (nsec): min=1557, max=28839k, avg=223691.87, stdev=1687686.91 00:31:31.046 clat (usec): min=2471, max=69448, avg=26846.13, stdev=14472.14 00:31:31.046 lat (usec): min=10644, max=69457, avg=27069.82, stdev=14609.45 00:31:31.046 clat percentiles (usec): 00:31:31.046 | 1.00th=[10683], 5.00th=[12649], 10.00th=[12911], 20.00th=[13173], 00:31:31.046 | 30.00th=[13566], 40.00th=[15008], 50.00th=[22938], 60.00th=[32900], 00:31:31.046 | 70.00th=[35914], 80.00th=[39584], 90.00th=[45351], 95.00th=[54789], 00:31:31.046 | 99.00th=[64226], 99.50th=[64226], 99.90th=[69731], 99.95th=[69731], 00:31:31.046 | 99.99th=[69731] 00:31:31.046 write: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec); 0 zone resets 00:31:31.046 slat (usec): min=5, max=26687, avg=202.05, stdev=1697.41 00:31:31.046 clat (usec): min=7044, max=70817, avg=27397.57, stdev=14147.55 00:31:31.046 lat (usec): min=7054, max=70849, avg=27599.62, stdev=14304.16 00:31:31.046 clat percentiles (usec): 00:31:31.046 | 1.00th=[11863], 5.00th=[12125], 10.00th=[12387], 20.00th=[12649], 00:31:31.046 | 30.00th=[13042], 40.00th=[17171], 50.00th=[28705], 60.00th=[30016], 00:31:31.046 | 70.00th=[34341], 80.00th=[42730], 90.00th=[44303], 95.00th=[54264], 00:31:31.046 | 99.00th=[61080], 99.50th=[61080], 99.90th=[69731], 99.95th=[69731], 00:31:31.046 | 99.99th=[70779] 00:31:31.046 bw ( KiB/s): min= 7512, max=12263, per=13.69%, avg=9887.50, stdev=3359.46, samples=2 00:31:31.046 iops : min= 1878, max= 3065, avg=2471.50, stdev=839.34, samples=2 00:31:31.046 lat (msec) : 4=0.02%, 10=0.52%, 20=43.71%, 50=49.00%, 100=6.75% 00:31:31.046 cpu : usr=2.28%, sys=3.87%, ctx=115, majf=0, minf=1 00:31:31.046 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:31:31.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.046 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:31.046 issued rwts: total=2091,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:31.046 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:31.046 job2: (groupid=0, jobs=1): err= 0: pid=3874020: Wed Nov 20 19:08:53 2024 00:31:31.046 read: IOPS=5363, BW=21.0MiB/s (22.0MB/s)(21.1MiB/1007msec) 00:31:31.046 slat (nsec): min=1346, max=10620k, avg=92178.72, stdev=756310.22 00:31:31.046 clat (usec): min=3538, max=22062, avg=12001.66, stdev=2942.86 00:31:31.046 lat (usec): min=3544, max=27166, avg=12093.84, stdev=2994.44 00:31:31.046 clat percentiles (usec): 00:31:31.046 | 1.00th=[ 6587], 5.00th=[ 7635], 10.00th=[ 9634], 20.00th=[10290], 00:31:31.046 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11207], 60.00th=[11600], 00:31:31.046 | 70.00th=[12518], 80.00th=[14091], 90.00th=[16450], 95.00th=[18744], 00:31:31.046 | 99.00th=[20317], 99.50th=[20579], 99.90th=[21627], 99.95th=[21890], 00:31:31.046 | 99.99th=[22152] 00:31:31.046 write: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec); 0 zone resets 00:31:31.046 slat (usec): min=2, max=14116, avg=82.86, stdev=672.73 00:31:31.046 clat (usec): min=2541, max=23467, avg=10894.73, stdev=2388.43 00:31:31.046 lat (usec): min=2552, max=23483, avg=10977.60, stdev=2444.94 00:31:31.046 clat percentiles (usec): 00:31:31.046 | 1.00th=[ 5342], 5.00th=[ 7242], 10.00th=[ 7701], 20.00th=[ 9110], 00:31:31.046 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11076], 60.00th=[11338], 00:31:31.046 | 70.00th=[11469], 80.00th=[11731], 90.00th=[15008], 95.00th=[15664], 00:31:31.046 | 99.00th=[17171], 99.50th=[17695], 99.90th=[21103], 99.95th=[21627], 00:31:31.046 | 99.99th=[23462] 00:31:31.046 bw ( KiB/s): min=22395, max=22616, per=31.17%, avg=22505.50, stdev=156.27, samples=2 00:31:31.046 iops : min= 5598, max= 5654, avg=5626.00, stdev=39.60, samples=2 00:31:31.046 lat (msec) : 4=0.35%, 10=22.00%, 20=76.75%, 50=0.90% 00:31:31.046 cpu : usr=5.37%, sys=6.76%, 
ctx=310, majf=0, minf=1 00:31:31.046 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:31:31.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:31.046 issued rwts: total=5401,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:31.046 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:31.046 job3: (groupid=0, jobs=1): err= 0: pid=3874025: Wed Nov 20 19:08:53 2024 00:31:31.046 read: IOPS=3486, BW=13.6MiB/s (14.3MB/s)(13.7MiB/1008msec) 00:31:31.046 slat (usec): min=2, max=16290, avg=138.98, stdev=1099.01 00:31:31.046 clat (usec): min=1829, max=62811, avg=17409.74, stdev=6010.98 00:31:31.046 lat (usec): min=8529, max=62818, avg=17548.71, stdev=6112.03 00:31:31.046 clat percentiles (usec): 00:31:31.046 | 1.00th=[10683], 5.00th=[12125], 10.00th=[13435], 20.00th=[14353], 00:31:31.046 | 30.00th=[14877], 40.00th=[15664], 50.00th=[16319], 60.00th=[16712], 00:31:31.046 | 70.00th=[17171], 80.00th=[18482], 90.00th=[22414], 95.00th=[25035], 00:31:31.046 | 99.00th=[50070], 99.50th=[56886], 99.90th=[62653], 99.95th=[62653], 00:31:31.046 | 99.99th=[62653] 00:31:31.046 write: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec); 0 zone resets 00:31:31.046 slat (usec): min=4, max=14386, avg=136.52, stdev=954.41 00:31:31.046 clat (usec): min=6138, max=62818, avg=18519.65, stdev=12178.34 00:31:31.046 lat (usec): min=6150, max=62836, avg=18656.18, stdev=12279.32 00:31:31.046 clat percentiles (usec): 00:31:31.046 | 1.00th=[ 8291], 5.00th=[ 8717], 10.00th=[10290], 20.00th=[11207], 00:31:31.046 | 30.00th=[11994], 40.00th=[13829], 50.00th=[14877], 60.00th=[15401], 00:31:31.046 | 70.00th=[16712], 80.00th=[20579], 90.00th=[47449], 95.00th=[51119], 00:31:31.046 | 99.00th=[52691], 99.50th=[53216], 99.90th=[53740], 99.95th=[62653], 00:31:31.046 | 99.99th=[62653] 00:31:31.046 bw ( KiB/s): min=12263, max=16384, per=19.84%, avg=14323.50, 
stdev=2913.99, samples=2 00:31:31.046 iops : min= 3065, max= 4096, avg=3580.50, stdev=729.03, samples=2 00:31:31.046 lat (msec) : 2=0.01%, 10=3.93%, 20=76.91%, 50=14.53%, 100=4.62% 00:31:31.046 cpu : usr=3.57%, sys=5.06%, ctx=189, majf=0, minf=1 00:31:31.046 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:31:31.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:31.046 issued rwts: total=3514,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:31.047 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:31.047 00:31:31.047 Run status group 0 (all jobs): 00:31:31.047 READ: bw=66.5MiB/s (69.7MB/s), 8298KiB/s-23.9MiB/s (8497kB/s-25.0MB/s), io=67.0MiB (70.2MB), run=1006-1008msec 00:31:31.047 WRITE: bw=70.5MiB/s (73.9MB/s), 9.92MiB/s-24.9MiB/s (10.4MB/s-26.1MB/s), io=71.1MiB (74.5MB), run=1006-1008msec 00:31:31.047 00:31:31.047 Disk stats (read/write): 00:31:31.047 nvme0n1: ios=5170/5567, merge=0/0, ticks=52867/51650, in_queue=104517, util=86.67% 00:31:31.047 nvme0n2: ios=2046/2055, merge=0/0, ticks=27987/24354, in_queue=52341, util=100.00% 00:31:31.047 nvme0n3: ios=4643/4638, merge=0/0, ticks=54044/48357, in_queue=102401, util=97.61% 00:31:31.047 nvme0n4: ios=2718/3072, merge=0/0, ticks=47224/57993, in_queue=105217, util=98.01% 00:31:31.047 19:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:31:31.047 [global] 00:31:31.047 thread=1 00:31:31.047 invalidate=1 00:31:31.047 rw=randwrite 00:31:31.047 time_based=1 00:31:31.047 runtime=1 00:31:31.047 ioengine=libaio 00:31:31.047 direct=1 00:31:31.047 bs=4096 00:31:31.047 iodepth=128 00:31:31.047 norandommap=0 00:31:31.047 numjobs=1 00:31:31.047 00:31:31.047 verify_dump=1 00:31:31.047 verify_backlog=512 00:31:31.047 
verify_state_save=0 00:31:31.047 do_verify=1 00:31:31.047 verify=crc32c-intel 00:31:31.047 [job0] 00:31:31.047 filename=/dev/nvme0n1 00:31:31.047 [job1] 00:31:31.047 filename=/dev/nvme0n2 00:31:31.047 [job2] 00:31:31.047 filename=/dev/nvme0n3 00:31:31.047 [job3] 00:31:31.047 filename=/dev/nvme0n4 00:31:31.047 Could not set queue depth (nvme0n1) 00:31:31.047 Could not set queue depth (nvme0n2) 00:31:31.047 Could not set queue depth (nvme0n3) 00:31:31.047 Could not set queue depth (nvme0n4) 00:31:31.303 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:31.303 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:31.303 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:31.303 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:31.303 fio-3.35 00:31:31.303 Starting 4 threads 00:31:32.675 00:31:32.675 job0: (groupid=0, jobs=1): err= 0: pid=3874391: Wed Nov 20 19:08:54 2024 00:31:32.675 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:31:32.675 slat (nsec): min=1552, max=44455k, avg=145666.84, stdev=1247269.90 00:31:32.675 clat (usec): min=5757, max=69397, avg=19917.04, stdev=13535.88 00:31:32.675 lat (usec): min=5761, max=69573, avg=20062.70, stdev=13610.65 00:31:32.675 clat percentiles (usec): 00:31:32.675 | 1.00th=[ 6849], 5.00th=[ 8225], 10.00th=[ 8356], 20.00th=[10028], 00:31:32.675 | 30.00th=[10814], 40.00th=[12387], 50.00th=[14222], 60.00th=[14484], 00:31:32.675 | 70.00th=[26084], 80.00th=[32113], 90.00th=[41157], 95.00th=[52167], 00:31:32.675 | 99.00th=[55837], 99.50th=[55837], 99.90th=[56361], 99.95th=[61080], 00:31:32.675 | 99.99th=[69731] 00:31:32.675 write: IOPS=3015, BW=11.8MiB/s (12.4MB/s)(11.9MiB/1006msec); 0 zone resets 00:31:32.675 slat (usec): min=2, max=15198, avg=197.33, stdev=1124.45 
00:31:32.675 clat (usec): min=4356, max=82752, avg=25086.83, stdev=16445.11 00:31:32.675 lat (usec): min=6019, max=82759, avg=25284.15, stdev=16575.92 00:31:32.675 clat percentiles (usec): 00:31:32.675 | 1.00th=[ 6652], 5.00th=[10421], 10.00th=[10683], 20.00th=[11207], 00:31:32.675 | 30.00th=[11863], 40.00th=[14222], 50.00th=[17957], 60.00th=[30016], 00:31:32.675 | 70.00th=[33162], 80.00th=[35914], 90.00th=[42730], 95.00th=[57934], 00:31:32.675 | 99.00th=[80217], 99.50th=[82314], 99.90th=[82314], 99.95th=[82314], 00:31:32.675 | 99.99th=[82314] 00:31:32.675 bw ( KiB/s): min=10968, max=12288, per=17.33%, avg=11628.00, stdev=933.38, samples=2 00:31:32.675 iops : min= 2742, max= 3072, avg=2907.00, stdev=233.35, samples=2 00:31:32.675 lat (msec) : 10=11.21%, 20=47.78%, 50=34.82%, 100=6.19% 00:31:32.675 cpu : usr=2.79%, sys=4.18%, ctx=294, majf=0, minf=1 00:31:32.675 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:31:32.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:32.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:32.675 issued rwts: total=2560,3034,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:32.675 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:32.675 job1: (groupid=0, jobs=1): err= 0: pid=3874392: Wed Nov 20 19:08:54 2024 00:31:32.675 read: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec) 00:31:32.675 slat (nsec): min=1227, max=16121k, avg=81906.97, stdev=584632.39 00:31:32.675 clat (usec): min=5810, max=42431, avg=10266.53, stdev=4739.99 00:31:32.675 lat (usec): min=5813, max=42437, avg=10348.44, stdev=4783.59 00:31:32.675 clat percentiles (usec): 00:31:32.675 | 1.00th=[ 6325], 5.00th=[ 7046], 10.00th=[ 7439], 20.00th=[ 7898], 00:31:32.675 | 30.00th=[ 8160], 40.00th=[ 8291], 50.00th=[ 8455], 60.00th=[ 8979], 00:31:32.675 | 70.00th=[ 9765], 80.00th=[10945], 90.00th=[16319], 95.00th=[22938], 00:31:32.675 | 99.00th=[30016], 99.50th=[30540], 
99.90th=[41681], 99.95th=[41681], 00:31:32.675 | 99.99th=[42206] 00:31:32.675 write: IOPS=6446, BW=25.2MiB/s (26.4MB/s)(25.3MiB/1003msec); 0 zone resets 00:31:32.675 slat (nsec): min=1872, max=18034k, avg=73644.31, stdev=544845.66 00:31:32.675 clat (usec): min=410, max=47527, avg=9921.93, stdev=4128.57 00:31:32.675 lat (usec): min=1126, max=47535, avg=9995.58, stdev=4175.69 00:31:32.675 clat percentiles (usec): 00:31:32.675 | 1.00th=[ 5538], 5.00th=[ 7504], 10.00th=[ 7701], 20.00th=[ 7963], 00:31:32.675 | 30.00th=[ 8029], 40.00th=[ 8160], 50.00th=[ 8356], 60.00th=[ 9110], 00:31:32.675 | 70.00th=[10028], 80.00th=[10552], 90.00th=[14353], 95.00th=[19792], 00:31:32.675 | 99.00th=[26346], 99.50th=[35390], 99.90th=[40109], 99.95th=[40109], 00:31:32.675 | 99.99th=[47449] 00:31:32.675 bw ( KiB/s): min=22536, max=28168, per=37.78%, avg=25352.00, stdev=3982.43, samples=2 00:31:32.675 iops : min= 5634, max= 7042, avg=6338.00, stdev=995.61, samples=2 00:31:32.675 lat (usec) : 500=0.01% 00:31:32.675 lat (msec) : 2=0.02%, 4=0.33%, 10=70.74%, 20=24.10%, 50=4.81% 00:31:32.675 cpu : usr=3.09%, sys=4.29%, ctx=639, majf=0, minf=2 00:31:32.675 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:31:32.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:32.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:32.675 issued rwts: total=6144,6466,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:32.675 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:32.675 job2: (groupid=0, jobs=1): err= 0: pid=3874393: Wed Nov 20 19:08:54 2024 00:31:32.675 read: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec) 00:31:32.675 slat (nsec): min=1339, max=19764k, avg=120816.75, stdev=918971.12 00:31:32.675 clat (usec): min=2490, max=60752, avg=15378.49, stdev=6475.58 00:31:32.675 lat (usec): min=2499, max=60761, avg=15499.31, stdev=6570.33 00:31:32.675 clat percentiles (usec): 00:31:32.675 | 1.00th=[ 7111], 
5.00th=[ 7832], 10.00th=[10028], 20.00th=[11207], 00:31:32.675 | 30.00th=[12256], 40.00th=[12649], 50.00th=[14091], 60.00th=[15270], 00:31:32.675 | 70.00th=[16319], 80.00th=[18482], 90.00th=[21103], 95.00th=[27395], 00:31:32.675 | 99.00th=[48497], 99.50th=[54789], 99.90th=[60556], 99.95th=[60556], 00:31:32.675 | 99.99th=[60556] 00:31:32.675 write: IOPS=3823, BW=14.9MiB/s (15.7MB/s)(15.1MiB/1010msec); 0 zone resets 00:31:32.675 slat (usec): min=2, max=24066, avg=131.00, stdev=935.51 00:31:32.675 clat (usec): min=1787, max=60731, avg=18846.12, stdev=11131.46 00:31:32.675 lat (usec): min=1795, max=60737, avg=18977.12, stdev=11199.83 00:31:32.675 clat percentiles (usec): 00:31:32.675 | 1.00th=[ 3720], 5.00th=[ 7177], 10.00th=[ 8848], 20.00th=[10814], 00:31:32.675 | 30.00th=[11600], 40.00th=[12256], 50.00th=[14091], 60.00th=[17171], 00:31:32.675 | 70.00th=[20579], 80.00th=[28967], 90.00th=[37487], 95.00th=[42206], 00:31:32.675 | 99.00th=[48497], 99.50th=[50070], 99.90th=[53216], 99.95th=[60556], 00:31:32.675 | 99.99th=[60556] 00:31:32.675 bw ( KiB/s): min=13496, max=16384, per=22.26%, avg=14940.00, stdev=2042.12, samples=2 00:31:32.675 iops : min= 3374, max= 4096, avg=3735.00, stdev=510.53, samples=2 00:31:32.675 lat (msec) : 2=0.09%, 4=0.79%, 10=12.73%, 20=63.24%, 50=22.44% 00:31:32.675 lat (msec) : 100=0.70% 00:31:32.675 cpu : usr=3.27%, sys=4.06%, ctx=325, majf=0, minf=1 00:31:32.675 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:32.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:32.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:32.675 issued rwts: total=3584,3862,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:32.675 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:32.675 job3: (groupid=0, jobs=1): err= 0: pid=3874394: Wed Nov 20 19:08:54 2024 00:31:32.675 read: IOPS=3361, BW=13.1MiB/s (13.8MB/s)(13.2MiB/1004msec) 00:31:32.675 slat (nsec): min=1549, 
max=16370k, avg=130608.28, stdev=962793.86 00:31:32.675 clat (usec): min=1894, max=80978, avg=16960.60, stdev=9019.90 00:31:32.675 lat (usec): min=3544, max=89906, avg=17091.21, stdev=9101.69 00:31:32.675 clat percentiles (usec): 00:31:32.675 | 1.00th=[ 7570], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[10028], 00:31:32.675 | 30.00th=[10683], 40.00th=[13042], 50.00th=[14091], 60.00th=[16581], 00:31:32.675 | 70.00th=[18482], 80.00th=[23987], 90.00th=[28181], 95.00th=[32113], 00:31:32.675 | 99.00th=[47449], 99.50th=[69731], 99.90th=[81265], 99.95th=[81265], 00:31:32.675 | 99.99th=[81265] 00:31:32.675 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:31:32.675 slat (usec): min=2, max=12949, avg=149.39, stdev=940.93 00:31:32.675 clat (usec): min=1422, max=120127, avg=19517.19, stdev=17728.44 00:31:32.675 lat (usec): min=1433, max=120137, avg=19666.58, stdev=17860.47 00:31:32.675 clat percentiles (msec): 00:31:32.675 | 1.00th=[ 5], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 11], 00:31:32.675 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 15], 60.00th=[ 17], 00:31:32.675 | 70.00th=[ 20], 80.00th=[ 23], 90.00th=[ 37], 95.00th=[ 46], 00:31:32.675 | 99.00th=[ 109], 99.50th=[ 109], 99.90th=[ 121], 99.95th=[ 121], 00:31:32.675 | 99.99th=[ 121] 00:31:32.675 bw ( KiB/s): min= 8552, max=20120, per=21.36%, avg=14336.00, stdev=8179.81, samples=2 00:31:32.675 iops : min= 2138, max= 5030, avg=3584.00, stdev=2044.95, samples=2 00:31:32.675 lat (msec) : 2=0.04%, 4=0.46%, 10=17.99%, 20=55.57%, 50=22.98% 00:31:32.675 lat (msec) : 100=2.05%, 250=0.91% 00:31:32.675 cpu : usr=3.99%, sys=4.19%, ctx=260, majf=0, minf=1 00:31:32.675 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:31:32.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:32.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:32.675 issued rwts: total=3375,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:32.675 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:31:32.675 00:31:32.675 Run status group 0 (all jobs): 00:31:32.676 READ: bw=60.6MiB/s (63.5MB/s), 9.94MiB/s-23.9MiB/s (10.4MB/s-25.1MB/s), io=61.2MiB (64.2MB), run=1003-1010msec 00:31:32.676 WRITE: bw=65.5MiB/s (68.7MB/s), 11.8MiB/s-25.2MiB/s (12.4MB/s-26.4MB/s), io=66.2MiB (69.4MB), run=1003-1010msec 00:31:32.676 00:31:32.676 Disk stats (read/write): 00:31:32.676 nvme0n1: ios=2611/2687, merge=0/0, ticks=17171/17808, in_queue=34979, util=97.09% 00:31:32.676 nvme0n2: ios=5120/5150, merge=0/0, ticks=25776/25951, in_queue=51727, util=86.98% 00:31:32.676 nvme0n3: ios=3108/3495, merge=0/0, ticks=42015/53752, in_queue=95767, util=100.00% 00:31:32.676 nvme0n4: ios=2799/3072, merge=0/0, ticks=32442/38024, in_queue=70466, util=98.32% 00:31:32.676 19:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:31:32.676 19:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3874625 00:31:32.676 19:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:31:32.676 19:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:31:32.676 [global] 00:31:32.676 thread=1 00:31:32.676 invalidate=1 00:31:32.676 rw=read 00:31:32.676 time_based=1 00:31:32.676 runtime=10 00:31:32.676 ioengine=libaio 00:31:32.676 direct=1 00:31:32.676 bs=4096 00:31:32.676 iodepth=1 00:31:32.676 norandommap=1 00:31:32.676 numjobs=1 00:31:32.676 00:31:32.676 [job0] 00:31:32.676 filename=/dev/nvme0n1 00:31:32.676 [job1] 00:31:32.676 filename=/dev/nvme0n2 00:31:32.676 [job2] 00:31:32.676 filename=/dev/nvme0n3 00:31:32.676 [job3] 00:31:32.676 filename=/dev/nvme0n4 00:31:32.676 Could not set queue depth (nvme0n1) 00:31:32.676 Could not set queue depth (nvme0n2) 00:31:32.676 Could not set queue depth 
(nvme0n3) 00:31:32.676 Could not set queue depth (nvme0n4) 00:31:32.933 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:32.933 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:32.933 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:32.933 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:32.933 fio-3.35 00:31:32.933 Starting 4 threads 00:31:36.204 19:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:31:36.204 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=29593600, buflen=4096 00:31:36.204 fio: pid=3874770, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:36.204 19:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:31:36.204 19:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:36.204 19:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:31:36.204 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=311296, buflen=4096 00:31:36.204 fio: pid=3874769, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:36.204 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=1617920, buflen=4096 00:31:36.204 fio: pid=3874767, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:36.204 19:08:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:36.204 19:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:31:36.461 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=335872, buflen=4096 00:31:36.461 fio: pid=3874768, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:36.461 19:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:36.461 19:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:31:36.461 00:31:36.461 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3874767: Wed Nov 20 19:08:58 2024 00:31:36.461 read: IOPS=127, BW=507KiB/s (519kB/s)(1580KiB/3115msec) 00:31:36.461 slat (usec): min=6, max=12725, avg=41.89, stdev=639.00 00:31:36.461 clat (usec): min=206, max=42093, avg=7787.28, stdev=15846.08 00:31:36.461 lat (usec): min=213, max=54078, avg=7829.22, stdev=15932.89 00:31:36.461 clat percentiles (usec): 00:31:36.461 | 1.00th=[ 235], 5.00th=[ 241], 10.00th=[ 245], 20.00th=[ 247], 00:31:36.461 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 251], 60.00th=[ 253], 00:31:36.461 | 70.00th=[ 255], 80.00th=[ 269], 90.00th=[41157], 95.00th=[41157], 00:31:36.461 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:36.461 | 99.99th=[42206] 00:31:36.461 bw ( KiB/s): min= 96, max= 2648, per=5.57%, avg=523.17, stdev=1040.96, samples=6 00:31:36.461 iops : min= 24, max= 662, avg=130.67, stdev=260.30, samples=6 00:31:36.461 lat (usec) : 250=40.15%, 500=40.91%, 
750=0.25% 00:31:36.461 lat (msec) : 50=18.43% 00:31:36.461 cpu : usr=0.00%, sys=0.19%, ctx=398, majf=0, minf=1 00:31:36.461 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:36.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.461 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.461 issued rwts: total=396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.461 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:36.461 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3874768: Wed Nov 20 19:08:58 2024 00:31:36.461 read: IOPS=25, BW=99.0KiB/s (101kB/s)(328KiB/3313msec) 00:31:36.461 slat (usec): min=10, max=11744, avg=211.32, stdev=1347.22 00:31:36.461 clat (usec): min=440, max=42089, avg=40022.85, stdev=6266.31 00:31:36.461 lat (usec): min=473, max=52980, avg=40236.48, stdev=6446.25 00:31:36.461 clat percentiles (usec): 00:31:36.461 | 1.00th=[ 441], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:36.461 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:36.461 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:36.461 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:36.461 | 99.99th=[42206] 00:31:36.461 bw ( KiB/s): min= 96, max= 106, per=1.05%, avg=99.00, stdev= 4.69, samples=6 00:31:36.461 iops : min= 24, max= 26, avg=24.67, stdev= 1.03, samples=6 00:31:36.461 lat (usec) : 500=1.20%, 1000=1.20% 00:31:36.461 lat (msec) : 50=96.39% 00:31:36.461 cpu : usr=0.12%, sys=0.00%, ctx=88, majf=0, minf=2 00:31:36.461 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:36.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.461 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.462 issued rwts: total=83,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:31:36.462 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:36.462 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3874769: Wed Nov 20 19:08:58 2024 00:31:36.462 read: IOPS=26, BW=103KiB/s (106kB/s)(304KiB/2943msec) 00:31:36.462 slat (nsec): min=9414, max=32595, avg=22336.39, stdev=3017.87 00:31:36.462 clat (usec): min=371, max=42099, avg=38408.10, stdev=10010.65 00:31:36.462 lat (usec): min=397, max=42121, avg=38430.42, stdev=10010.02 00:31:36.462 clat percentiles (usec): 00:31:36.462 | 1.00th=[ 371], 5.00th=[ 775], 10.00th=[40633], 20.00th=[40633], 00:31:36.462 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:36.462 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:31:36.462 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:36.462 | 99.99th=[42206] 00:31:36.462 bw ( KiB/s): min= 96, max= 120, per=1.11%, avg=104.00, stdev=11.31, samples=5 00:31:36.462 iops : min= 24, max= 30, avg=26.00, stdev= 2.83, samples=5 00:31:36.462 lat (usec) : 500=3.90%, 1000=1.30% 00:31:36.462 lat (msec) : 4=1.30%, 50=92.21% 00:31:36.462 cpu : usr=0.10%, sys=0.00%, ctx=77, majf=0, minf=2 00:31:36.462 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:36.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.462 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.462 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.462 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:36.462 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3874770: Wed Nov 20 19:08:58 2024 00:31:36.462 read: IOPS=2665, BW=10.4MiB/s (10.9MB/s)(28.2MiB/2711msec) 00:31:36.462 slat (nsec): min=6319, max=42379, avg=8079.52, stdev=1750.77 00:31:36.462 clat (usec): min=165, max=41041, avg=362.23, 
stdev=2521.30 00:31:36.462 lat (usec): min=184, max=41063, avg=370.31, stdev=2522.11 00:31:36.462 clat percentiles (usec): 00:31:36.462 | 1.00th=[ 186], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 194], 00:31:36.462 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 198], 60.00th=[ 200], 00:31:36.462 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 253], 95.00th=[ 260], 00:31:36.462 | 99.00th=[ 269], 99.50th=[ 302], 99.90th=[41157], 99.95th=[41157], 00:31:36.462 | 99.99th=[41157] 00:31:36.462 bw ( KiB/s): min= 96, max=19448, per=100.00%, avg=10177.60, stdev=8146.48, samples=5 00:31:36.462 iops : min= 24, max= 4862, avg=2544.40, stdev=2036.62, samples=5 00:31:36.462 lat (usec) : 250=89.55%, 500=10.03%, 750=0.01% 00:31:36.462 lat (msec) : 50=0.39% 00:31:36.462 cpu : usr=1.59%, sys=3.80%, ctx=7226, majf=0, minf=2 00:31:36.462 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:36.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.462 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.462 issued rwts: total=7226,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.462 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:36.462 00:31:36.462 Run status group 0 (all jobs): 00:31:36.462 READ: bw=9391KiB/s (9616kB/s), 99.0KiB/s-10.4MiB/s (101kB/s-10.9MB/s), io=30.4MiB (31.9MB), run=2711-3313msec 00:31:36.462 00:31:36.462 Disk stats (read/write): 00:31:36.462 nvme0n1: ios=396/0, merge=0/0, ticks=3088/0, in_queue=3088, util=95.25% 00:31:36.462 nvme0n2: ios=110/0, merge=0/0, ticks=3903/0, in_queue=3903, util=99.78% 00:31:36.462 nvme0n3: ios=74/0, merge=0/0, ticks=2839/0, in_queue=2839, util=96.52% 00:31:36.462 nvme0n4: ios=6828/0, merge=0/0, ticks=2439/0, in_queue=2439, util=96.45% 00:31:36.719 19:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:36.719 19:08:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:31:36.719 19:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:36.719 19:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:31:36.976 19:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:36.976 19:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:31:37.232 19:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:37.232 19:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:31:37.489 19:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:31:37.489 19:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3874625 00:31:37.489 19:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:31:37.489 19:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:37.489 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:37.489 19:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect 
SPDKISFASTANDAWESOME 00:31:37.489 19:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:31:37.489 19:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:37.489 19:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:37.489 19:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:37.489 19:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:37.489 19:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:31:37.489 19:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:31:37.489 19:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:31:37.489 nvmf hotplug test: fio failed as expected 00:31:37.489 19:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:37.747 19:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:31:37.747 19:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:31:37.747 19:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:31:37.747 19:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:31:37.747 19:08:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:31:37.747 19:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:37.747 19:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:31:37.747 19:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:37.747 19:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:31:37.747 19:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:37.747 19:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:37.747 rmmod nvme_tcp 00:31:37.747 rmmod nvme_fabrics 00:31:37.747 rmmod nvme_keyring 00:31:37.747 19:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:37.747 19:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:31:37.747 19:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:31:37.747 19:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3871934 ']' 00:31:37.747 19:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3871934 00:31:37.747 19:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3871934 ']' 00:31:37.747 19:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3871934 00:31:37.747 19:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:31:37.747 19:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:37.747 19:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3871934 00:31:38.006 19:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:38.006 19:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:38.006 19:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3871934' 00:31:38.006 killing process with pid 3871934 00:31:38.006 19:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3871934 00:31:38.006 19:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3871934 00:31:38.006 19:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:38.006 19:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:38.006 19:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:38.007 19:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:31:38.007 19:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:31:38.007 19:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:38.007 19:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:31:38.007 19:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:38.007 19:09:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:38.007 19:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:38.007 19:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:38.007 19:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:40.544 00:31:40.544 real 0m26.434s 00:31:40.544 user 1m31.220s 00:31:40.544 sys 0m10.950s 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:40.544 ************************************ 00:31:40.544 END TEST nvmf_fio_target 00:31:40.544 ************************************ 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:40.544 ************************************ 00:31:40.544 START TEST nvmf_bdevio 00:31:40.544 ************************************ 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:40.544 * Looking for test storage... 00:31:40.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:31:40.544 19:09:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # 
(( ver1[v] < ver2[v] )) 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:40.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.544 --rc genhtml_branch_coverage=1 00:31:40.544 --rc genhtml_function_coverage=1 00:31:40.544 --rc genhtml_legend=1 00:31:40.544 --rc geninfo_all_blocks=1 00:31:40.544 --rc geninfo_unexecuted_blocks=1 00:31:40.544 00:31:40.544 ' 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:40.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.544 --rc genhtml_branch_coverage=1 00:31:40.544 --rc genhtml_function_coverage=1 00:31:40.544 --rc genhtml_legend=1 00:31:40.544 --rc geninfo_all_blocks=1 00:31:40.544 --rc geninfo_unexecuted_blocks=1 00:31:40.544 00:31:40.544 ' 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:40.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.544 --rc genhtml_branch_coverage=1 00:31:40.544 --rc genhtml_function_coverage=1 00:31:40.544 --rc genhtml_legend=1 00:31:40.544 --rc geninfo_all_blocks=1 00:31:40.544 --rc geninfo_unexecuted_blocks=1 00:31:40.544 00:31:40.544 ' 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:40.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.544 --rc genhtml_branch_coverage=1 00:31:40.544 --rc genhtml_function_coverage=1 00:31:40.544 --rc genhtml_legend=1 00:31:40.544 --rc 
geninfo_all_blocks=1 00:31:40.544 --rc geninfo_unexecuted_blocks=1 00:31:40.544 00:31:40.544 ' 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:40.544 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:40.545 19:09:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:40.545 19:09:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:31:40.545 19:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:47.116 19:09:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:47.116 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:47.116 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:47.116 19:09:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:47.116 Found net devices under 0000:86:00.0: cvl_0_0 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:47.116 Found net devices under 0000:86:00.1: cvl_0_1 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:47.116 19:09:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:47.116 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:47.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:47.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.435 ms 00:31:47.117 00:31:47.117 --- 10.0.0.2 ping statistics --- 00:31:47.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:47.117 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:47.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:47.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:31:47.117 00:31:47.117 --- 10.0.0.1 ping statistics --- 00:31:47.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:47.117 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
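The nvmf_tcp_init steps above build a two-port loopback topology: one port of the E810 NIC is moved into a private network namespace to act as the target (10.0.0.2), while the other stays in the default namespace as the initiator (10.0.0.1). A minimal standalone sketch of that sequence follows; the interface names, namespace name, IPs and iptables rule are taken from this log, and the script defaults to DRY_RUN=1 (print only) since the real commands need root and the actual hardware.

```shell
#!/usr/bin/env bash
# Sketch of the netns topology nvmf/common.sh builds above.
# DRY_RUN=1 (the default) prints each command instead of executing it;
# set DRY_RUN=0 and run as root on a machine with these interfaces to apply.
set -euo pipefail

TARGET_IF=${TARGET_IF:-cvl_0_0}       # moved into the namespace, serves 10.0.0.2
INITIATOR_IF=${INITIATOR_IF:-cvl_0_1} # stays in the default namespace, 10.0.0.1
NS=${NS:-cvl_0_0_ns_spdk}

run() {
    if [[ ${DRY_RUN:-1} == 1 ]]; then
        echo "+ $*"
    else
        "$@"
    fi
}

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port from the initiator side, as common.sh's ipts() does.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# Sanity-check both directions, mirroring the pings in the log.
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
DONE=1
```

Because the target process is later launched with `ip netns exec cvl_0_0_ns_spdk`, it only sees the namespaced port, so initiator and target traffic genuinely traverse the NIC rather than the kernel loopback.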
nvmf/common.sh@509 -- # nvmfpid=3879517 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3879517 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3879517 ']' 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:47.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:47.117 [2024-11-20 19:09:08.605696] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:47.117 [2024-11-20 19:09:08.606588] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:31:47.117 [2024-11-20 19:09:08.606619] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:47.117 [2024-11-20 19:09:08.684576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:47.117 [2024-11-20 19:09:08.727785] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:47.117 [2024-11-20 19:09:08.727816] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:47.117 [2024-11-20 19:09:08.727824] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:47.117 [2024-11-20 19:09:08.727831] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:47.117 [2024-11-20 19:09:08.727836] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:47.117 [2024-11-20 19:09:08.729246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:47.117 [2024-11-20 19:09:08.729355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:47.117 [2024-11-20 19:09:08.729465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:47.117 [2024-11-20 19:09:08.729465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:47.117 [2024-11-20 19:09:08.796399] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:47.117 [2024-11-20 19:09:08.797063] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:47.117 [2024-11-20 19:09:08.797364] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:47.117 [2024-11-20 19:09:08.797744] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:47.117 [2024-11-20 19:09:08.797778] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:47.117 [2024-11-20 19:09:08.878128] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:47.117 Malloc0 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:47.117 [2024-11-20 19:09:08.962408] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
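The `rpc_cmd` calls above (bdevio.sh lines 18-22) configure the running target over its UNIX socket. Expressed as plain `rpc.py` invocations they look like the sketch below; the `rpc.py` path is an assumption, while the transport options, bdev geometry, NQN and listen address are taken directly from this log. The script only prints the commands, since executing them requires a live nvmf_tgt.

```shell
#!/usr/bin/env bash
# The RPC sequence bdevio.sh issues above, as standalone rpc.py calls.
set -euo pipefail
RPC=${RPC:-"./scripts/rpc.py"}   # hypothetical path to SPDK's rpc.py

cmds=(
  "nvmf_create_transport -t tcp -o -u 8192"   # TCP transport, 8192 B in-capsule data
  "bdev_malloc_create 64 512 -b Malloc0"      # 64 MiB RAM-backed bdev, 512 B blocks
  "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001"
  "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0"
  "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"
)
for c in "${cmds[@]}"; do
    echo "$RPC $c"               # print only; run each against a live target
done
```

After the final call the target logs the "Listening on 10.0.0.2 port 4420" notice seen above, at which point the bdevio initiator can attach with `bdev_nvme_attach_controller` parameters matching this subsystem.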
00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:31:47.117 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:31:47.118 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:31:47.118 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:31:47.118 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:47.118 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:47.118 { 00:31:47.118 "params": { 00:31:47.118 "name": "Nvme$subsystem", 00:31:47.118 "trtype": "$TEST_TRANSPORT", 00:31:47.118 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:47.118 "adrfam": "ipv4", 00:31:47.118 "trsvcid": "$NVMF_PORT", 00:31:47.118 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:47.118 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:47.118 "hdgst": ${hdgst:-false}, 00:31:47.118 "ddgst": ${ddgst:-false} 00:31:47.118 }, 00:31:47.118 "method": "bdev_nvme_attach_controller" 00:31:47.118 } 00:31:47.118 EOF 00:31:47.118 )") 00:31:47.118 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:31:47.118 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:31:47.118 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:31:47.118 19:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:47.118 "params": { 00:31:47.118 "name": "Nvme1", 00:31:47.118 "trtype": "tcp", 00:31:47.118 "traddr": "10.0.0.2", 00:31:47.118 "adrfam": "ipv4", 00:31:47.118 "trsvcid": "4420", 00:31:47.118 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:47.118 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:47.118 "hdgst": false, 00:31:47.118 "ddgst": false 00:31:47.118 }, 00:31:47.118 "method": "bdev_nvme_attach_controller" 00:31:47.118 }' 00:31:47.118 [2024-11-20 19:09:09.008457] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:31:47.118 [2024-11-20 19:09:09.008511] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3879663 ] 00:31:47.118 [2024-11-20 19:09:09.086853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:47.118 [2024-11-20 19:09:09.130460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:47.118 [2024-11-20 19:09:09.130567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:47.118 [2024-11-20 19:09:09.130567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:47.118 I/O targets: 00:31:47.118 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:31:47.118 00:31:47.118 00:31:47.118 CUnit - A unit testing framework for C - Version 2.1-3 00:31:47.118 http://cunit.sourceforge.net/ 00:31:47.118 00:31:47.118 00:31:47.118 Suite: bdevio tests on: Nvme1n1 00:31:47.375 Test: blockdev write read block ...passed 00:31:47.375 Test: blockdev write zeroes read block ...passed 00:31:47.375 Test: blockdev write zeroes read no split ...passed 00:31:47.375 Test: blockdev 
write zeroes read split ...passed 00:31:47.375 Test: blockdev write zeroes read split partial ...passed 00:31:47.375 Test: blockdev reset ...[2024-11-20 19:09:09.554798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:31:47.375 [2024-11-20 19:09:09.554859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c24340 (9): Bad file descriptor 00:31:47.375 [2024-11-20 19:09:09.600276] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:31:47.375 passed 00:31:47.375 Test: blockdev write read 8 blocks ...passed 00:31:47.375 Test: blockdev write read size > 128k ...passed 00:31:47.375 Test: blockdev write read invalid size ...passed 00:31:47.375 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:47.375 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:47.375 Test: blockdev write read max offset ...passed 00:31:47.633 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:47.633 Test: blockdev writev readv 8 blocks ...passed 00:31:47.633 Test: blockdev writev readv 30 x 1block ...passed 00:31:47.633 Test: blockdev writev readv block ...passed 00:31:47.633 Test: blockdev writev readv size > 128k ...passed 00:31:47.633 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:47.633 Test: blockdev comparev and writev ...[2024-11-20 19:09:09.815390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:47.633 [2024-11-20 19:09:09.815415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.633 [2024-11-20 19:09:09.815429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:47.633 
[2024-11-20 19:09:09.815437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:47.633 [2024-11-20 19:09:09.815721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:47.633 [2024-11-20 19:09:09.815731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:47.633 [2024-11-20 19:09:09.815743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:47.633 [2024-11-20 19:09:09.815749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:47.633 [2024-11-20 19:09:09.816022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:47.633 [2024-11-20 19:09:09.816032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:47.633 [2024-11-20 19:09:09.816045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:47.633 [2024-11-20 19:09:09.816053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:47.633 [2024-11-20 19:09:09.816335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:47.633 [2024-11-20 19:09:09.816348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:47.633 [2024-11-20 19:09:09.816359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:47.633 [2024-11-20 19:09:09.816366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:47.633 passed 00:31:47.633 Test: blockdev nvme passthru rw ...passed 00:31:47.633 Test: blockdev nvme passthru vendor specific ...[2024-11-20 19:09:09.899546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:47.633 [2024-11-20 19:09:09.899561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:47.633 [2024-11-20 19:09:09.899669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:47.633 [2024-11-20 19:09:09.899678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:47.633 [2024-11-20 19:09:09.899781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:47.633 [2024-11-20 19:09:09.899790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:47.633 [2024-11-20 19:09:09.899895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:47.633 [2024-11-20 19:09:09.899904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:47.633 passed 00:31:47.633 Test: blockdev nvme admin passthru ...passed 00:31:47.891 Test: blockdev copy ...passed 00:31:47.891 00:31:47.891 Run Summary: Type Total Ran Passed Failed Inactive 00:31:47.891 suites 1 1 n/a 0 0 00:31:47.891 tests 23 23 23 0 0 00:31:47.891 asserts 152 152 152 0 n/a 00:31:47.891 00:31:47.891 Elapsed time = 1.108 
seconds 00:31:47.891 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:47.891 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.891 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:47.891 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.891 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:31:47.891 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:31:47.891 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:47.891 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:31:47.891 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:47.891 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:31:47.891 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:47.891 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:47.891 rmmod nvme_tcp 00:31:47.891 rmmod nvme_fabrics 00:31:47.891 rmmod nvme_keyring 00:31:47.891 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:47.891 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:31:47.891 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:31:47.891 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 3879517 ']' 00:31:47.891 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3879517 00:31:47.891 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3879517 ']' 00:31:47.891 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3879517 00:31:47.891 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:31:47.891 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:47.891 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3879517 00:31:47.891 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:31:47.891 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:31:47.891 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3879517' 00:31:47.891 killing process with pid 3879517 00:31:48.150 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3879517 00:31:48.150 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3879517 00:31:48.150 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:48.150 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:48.150 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:48.150 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:31:48.150 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:31:48.150 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:48.150 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:31:48.150 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:48.150 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:48.150 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:48.150 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:48.150 19:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:50.685 19:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:50.685 00:31:50.685 real 0m10.056s 00:31:50.685 user 0m9.096s 00:31:50.685 sys 0m5.352s 00:31:50.685 19:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:50.685 19:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:50.685 ************************************ 00:31:50.685 END TEST nvmf_bdevio 00:31:50.685 ************************************ 00:31:50.685 19:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:31:50.685 00:31:50.685 real 4m34.244s 00:31:50.685 user 9m6.437s 00:31:50.685 sys 1m51.288s 00:31:50.685 19:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:31:50.685 19:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:50.685 ************************************ 00:31:50.685 END TEST nvmf_target_core_interrupt_mode 00:31:50.685 ************************************ 00:31:50.685 19:09:12 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:31:50.685 19:09:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:50.685 19:09:12 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:50.685 19:09:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:50.685 ************************************ 00:31:50.685 START TEST nvmf_interrupt 00:31:50.685 ************************************ 00:31:50.685 19:09:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:31:50.685 * Looking for test storage... 
00:31:50.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:50.685 19:09:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:50.685 19:09:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:31:50.685 19:09:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:50.685 19:09:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:50.685 19:09:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:50.685 19:09:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:50.685 19:09:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:50.685 19:09:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:31:50.685 19:09:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:31:50.685 19:09:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:31:50.685 19:09:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:31:50.685 19:09:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:31:50.685 19:09:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:31:50.685 19:09:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:31:50.685 19:09:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:50.685 19:09:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:50.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.686 --rc genhtml_branch_coverage=1 00:31:50.686 --rc genhtml_function_coverage=1 00:31:50.686 --rc genhtml_legend=1 00:31:50.686 --rc geninfo_all_blocks=1 00:31:50.686 --rc geninfo_unexecuted_blocks=1 00:31:50.686 00:31:50.686 ' 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:50.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.686 --rc genhtml_branch_coverage=1 00:31:50.686 --rc 
genhtml_function_coverage=1 00:31:50.686 --rc genhtml_legend=1 00:31:50.686 --rc geninfo_all_blocks=1 00:31:50.686 --rc geninfo_unexecuted_blocks=1 00:31:50.686 00:31:50.686 ' 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:50.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.686 --rc genhtml_branch_coverage=1 00:31:50.686 --rc genhtml_function_coverage=1 00:31:50.686 --rc genhtml_legend=1 00:31:50.686 --rc geninfo_all_blocks=1 00:31:50.686 --rc geninfo_unexecuted_blocks=1 00:31:50.686 00:31:50.686 ' 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:50.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.686 --rc genhtml_branch_coverage=1 00:31:50.686 --rc genhtml_function_coverage=1 00:31:50.686 --rc genhtml_legend=1 00:31:50.686 --rc geninfo_all_blocks=1 00:31:50.686 --rc geninfo_unexecuted_blocks=1 00:31:50.686 00:31:50.686 ' 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:50.686 
19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.686 
19:09:12 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:50.686 19:09:12 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:50.686 
19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:31:50.686 19:09:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:57.291 19:09:18 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:57.291 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:57.291 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:57.291 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:57.292 19:09:18 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:57.292 Found net devices under 0000:86:00.0: cvl_0_0 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:57.292 Found net devices under 0000:86:00.1: cvl_0_1 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:57.292 19:09:18 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:57.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:57.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.405 ms 00:31:57.292 00:31:57.292 --- 10.0.0.2 ping statistics --- 00:31:57.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:57.292 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:57.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:57.292 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:31:57.292 00:31:57.292 --- 10.0.0.1 ping statistics --- 00:31:57.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:57.292 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:57.292 19:09:18 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3883305 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3883305 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 3883305 ']' 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:57.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:57.292 19:09:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:57.292 [2024-11-20 19:09:18.728471] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:57.292 [2024-11-20 19:09:18.729480] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:31:57.292 [2024-11-20 19:09:18.729520] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:57.292 [2024-11-20 19:09:18.825615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:57.292 [2024-11-20 19:09:18.864739] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:57.292 [2024-11-20 19:09:18.864774] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:57.292 [2024-11-20 19:09:18.864781] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:57.292 [2024-11-20 19:09:18.864786] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:57.292 [2024-11-20 19:09:18.864791] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:57.292 [2024-11-20 19:09:18.866003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:57.292 [2024-11-20 19:09:18.866004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:57.292 [2024-11-20 19:09:18.934401] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:57.292 [2024-11-20 19:09:18.934938] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:57.292 [2024-11-20 19:09:18.935125] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:57.292 19:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:57.292 19:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:31:57.292 19:09:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:57.292 19:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:57.292 19:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:57.606 19:09:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:57.606 19:09:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:31:57.606 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:31:57.606 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:31:57.606 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:31:57.606 5000+0 records in 00:31:57.606 5000+0 records out 00:31:57.606 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0189244 s, 541 MB/s 00:31:57.606 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:31:57.606 19:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.606 19:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:57.606 AIO0 00:31:57.606 19:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.606 19:09:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:31:57.606 19:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.606 19:09:19 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:57.606 [2024-11-20 19:09:19.666803] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:57.606 19:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.606 19:09:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:57.606 19:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.606 19:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:57.606 19:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.606 19:09:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:31:57.606 19:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.606 19:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:57.606 19:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.606 19:09:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:57.606 19:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.606 19:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:57.606 [2024-11-20 19:09:19.703072] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:57.606 19:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.606 19:09:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:31:57.607 19:09:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3883305 0 00:31:57.607 19:09:19 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3883305 0 idle 00:31:57.607 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3883305 00:31:57.607 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:57.607 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:57.607 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:57.607 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:57.607 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:57.607 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:57.607 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:57.607 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:57.607 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:57.607 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3883305 -w 256 00:31:57.607 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:57.607 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3883305 root 20 0 128.2g 45312 33792 S 0.0 0.0 0:00.27 reactor_0' 00:31:57.607 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3883305 root 20 0 128.2g 45312 33792 S 0.0 0.0 0:00.27 reactor_0 00:31:57.607 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:57.607 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:57.607 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:57.607 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:57.607 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:57.607 
19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:57.607 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:57.607 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:57.607 19:09:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:31:57.607 19:09:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3883305 1 00:31:57.607 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3883305 1 idle 00:31:57.607 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3883305 00:31:57.607 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:57.607 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:57.607 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:57.607 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:57.607 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:57.607 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:57.607 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:57.866 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:57.866 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:57.866 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3883305 -w 256 00:31:57.866 19:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:57.866 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3883315 root 20 0 128.2g 45312 33792 S 0.0 0.0 0:00.00 reactor_1' 00:31:57.866 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3883315 root 20 0 128.2g 
45312 33792 S 0.0 0.0 0:00.00 reactor_1 00:31:57.866 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:57.866 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:57.866 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:57.866 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:57.866 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:57.866 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:57.866 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:57.866 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:57.867 19:09:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:31:57.867 19:09:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3883579 00:31:57.867 19:09:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:31:57.867 19:09:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:57.867 19:09:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:31:57.867 19:09:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3883305 0 00:31:57.867 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3883305 0 busy 00:31:57.867 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3883305 00:31:57.867 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:57.867 19:09:20 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:31:57.867 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:31:57.867 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:57.867 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:31:57.867 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:57.867 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:57.867 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:57.867 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3883305 -w 256 00:31:57.867 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:58.126 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3883305 root 20 0 128.2g 46080 33792 R 99.9 0.0 0:00.46 reactor_0' 00:31:58.126 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3883305 root 20 0 128.2g 46080 33792 R 99.9 0.0 0:00.46 reactor_0 00:31:58.126 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:58.126 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:58.126 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:31:58.126 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:31:58.126 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:31:58.126 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:31:58.126 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:31:58.126 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:58.126 19:09:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:31:58.126 19:09:20 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:31:58.126 19:09:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3883305 1 00:31:58.126 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3883305 1 busy 00:31:58.126 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3883305 00:31:58.126 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:58.126 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:31:58.126 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:31:58.126 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:58.126 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:31:58.126 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:58.126 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:58.126 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:58.126 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3883305 -w 256 00:31:58.126 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:58.126 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3883315 root 20 0 128.2g 46080 33792 R 93.3 0.0 0:00.27 reactor_1' 00:31:58.126 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3883315 root 20 0 128.2g 46080 33792 R 93.3 0.0 0:00.27 reactor_1 00:31:58.126 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:58.126 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:58.385 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:31:58.385 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=93 00:31:58.385 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:31:58.385 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:31:58.385 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:31:58.385 19:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:58.385 19:09:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3883579 00:32:08.355 Initializing NVMe Controllers 00:32:08.355 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:08.355 Controller IO queue size 256, less than required. 00:32:08.355 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:08.355 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:08.355 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:08.355 Initialization complete. Launching workers. 
00:32:08.355 ======================================================== 00:32:08.355 Latency(us) 00:32:08.355 Device Information : IOPS MiB/s Average min max 00:32:08.355 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16953.70 66.23 15108.14 2956.73 56119.66 00:32:08.355 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16690.80 65.20 15342.13 7630.60 25777.12 00:32:08.355 ======================================================== 00:32:08.355 Total : 33644.50 131.42 15224.22 2956.73 56119.66 00:32:08.355 00:32:08.355 19:09:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:08.355 19:09:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3883305 0 00:32:08.355 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3883305 0 idle 00:32:08.355 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3883305 00:32:08.355 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:08.355 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:08.355 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:08.355 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:08.355 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:08.355 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:08.355 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:08.355 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:08.355 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:08.355 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3883305 -w 256 00:32:08.355 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # 
grep reactor_0 00:32:08.355 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3883305 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:20.26 reactor_0' 00:32:08.355 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3883305 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:20.26 reactor_0 00:32:08.355 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:08.355 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:08.355 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:08.355 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:08.355 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:08.355 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:08.355 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:08.355 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:08.355 19:09:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:08.356 19:09:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3883305 1 00:32:08.356 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3883305 1 idle 00:32:08.356 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3883305 00:32:08.356 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:08.356 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:08.356 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:08.356 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:08.356 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:08.356 19:09:30 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:08.356 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:08.356 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:08.356 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:08.356 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3883305 -w 256 00:32:08.356 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:08.356 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3883315 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:10.00 reactor_1' 00:32:08.356 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3883315 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:10.00 reactor_1 00:32:08.356 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:08.356 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:08.356 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:08.356 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:08.356 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:08.356 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:08.356 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:08.356 19:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:08.356 19:09:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:08.925 19:09:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:32:08.925 19:09:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:32:08.925 19:09:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:08.925 19:09:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:08.925 19:09:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:32:10.832 19:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:10.832 19:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:10.832 19:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:10.832 19:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:10.832 19:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:10.832 19:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:32:10.832 19:09:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:10.832 19:09:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3883305 0 00:32:10.832 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3883305 0 idle 00:32:10.833 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3883305 00:32:10.833 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:10.833 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:10.833 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:10.833 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:10.833 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:10.833 19:09:33 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:10.833 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:10.833 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:10.833 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:10.833 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3883305 -w 256 00:32:10.833 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:11.092 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3883305 root 20 0 128.2g 72192 33792 S 6.7 0.0 0:20.54 reactor_0' 00:32:11.092 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3883305 root 20 0 128.2g 72192 33792 S 6.7 0.0 0:20.54 reactor_0 00:32:11.092 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:11.092 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:11.092 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:32:11.092 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:32:11.092 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:11.092 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:11.092 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:11.092 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:11.092 19:09:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:11.092 19:09:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3883305 1 00:32:11.092 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3883305 1 idle 00:32:11.092 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3883305 00:32:11.092 
19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:11.092 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:11.092 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:11.092 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:11.092 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:11.092 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:11.092 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:11.092 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:11.092 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:11.092 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3883305 -w 256 00:32:11.092 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:11.351 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3883315 root 20 0 128.2g 72192 33792 S 0.0 0.0 0:10.10 reactor_1' 00:32:11.351 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3883315 root 20 0 128.2g 72192 33792 S 0.0 0.0 0:10.10 reactor_1 00:32:11.351 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:11.351 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:11.351 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:11.351 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:11.351 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:11.351 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:11.351 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:32:11.351 19:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:11.351 19:09:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:11.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:11.351 19:09:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:11.351 19:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:32:11.351 19:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:11.351 19:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:11.352 19:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:11.352 19:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:11.352 19:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:32:11.352 19:09:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:32:11.352 19:09:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:32:11.352 19:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:11.352 19:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:32:11.352 19:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:11.352 19:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:32:11.352 19:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:11.352 19:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:11.352 rmmod nvme_tcp 00:32:11.352 rmmod nvme_fabrics 00:32:11.352 rmmod nvme_keyring 00:32:11.610 19:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:11.610 19:09:33 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:32:11.610 19:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:32:11.610 19:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 3883305 ']' 00:32:11.611 19:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3883305 00:32:11.611 19:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 3883305 ']' 00:32:11.611 19:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 3883305 00:32:11.611 19:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:32:11.611 19:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:11.611 19:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3883305 00:32:11.611 19:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:11.611 19:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:11.611 19:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3883305' 00:32:11.611 killing process with pid 3883305 00:32:11.611 19:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 3883305 00:32:11.611 19:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 3883305 00:32:11.869 19:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:11.869 19:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:11.869 19:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:11.869 19:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:32:11.869 19:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:32:11.869 19:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:11.869 19:09:33 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:32:11.869 19:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:11.869 19:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:11.870 19:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:11.870 19:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:11.870 19:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:13.776 19:09:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:13.776 00:32:13.776 real 0m23.433s 00:32:13.776 user 0m39.842s 00:32:13.776 sys 0m8.460s 00:32:13.776 19:09:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:13.776 19:09:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:13.776 ************************************ 00:32:13.776 END TEST nvmf_interrupt 00:32:13.776 ************************************ 00:32:13.776 00:32:13.776 real 27m27.179s 00:32:13.776 user 56m11.647s 00:32:13.776 sys 9m24.481s 00:32:13.776 19:09:36 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:13.776 19:09:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:13.776 ************************************ 00:32:13.776 END TEST nvmf_tcp 00:32:13.776 ************************************ 00:32:13.776 19:09:36 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:32:13.776 19:09:36 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:13.776 19:09:36 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:13.776 19:09:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:13.776 19:09:36 -- common/autotest_common.sh@10 -- # set +x 00:32:14.036 ************************************ 
00:32:14.036 START TEST spdkcli_nvmf_tcp 00:32:14.036 ************************************ 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:14.036 * Looking for test storage... 00:32:14.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:14.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.036 --rc genhtml_branch_coverage=1 00:32:14.036 --rc genhtml_function_coverage=1 00:32:14.036 --rc genhtml_legend=1 00:32:14.036 --rc geninfo_all_blocks=1 00:32:14.036 --rc geninfo_unexecuted_blocks=1 00:32:14.036 00:32:14.036 ' 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:14.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.036 --rc genhtml_branch_coverage=1 00:32:14.036 --rc genhtml_function_coverage=1 00:32:14.036 --rc genhtml_legend=1 00:32:14.036 --rc geninfo_all_blocks=1 
00:32:14.036 --rc geninfo_unexecuted_blocks=1 00:32:14.036 00:32:14.036 ' 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:14.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.036 --rc genhtml_branch_coverage=1 00:32:14.036 --rc genhtml_function_coverage=1 00:32:14.036 --rc genhtml_legend=1 00:32:14.036 --rc geninfo_all_blocks=1 00:32:14.036 --rc geninfo_unexecuted_blocks=1 00:32:14.036 00:32:14.036 ' 00:32:14.036 19:09:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:14.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.037 --rc genhtml_branch_coverage=1 00:32:14.037 --rc genhtml_function_coverage=1 00:32:14.037 --rc genhtml_legend=1 00:32:14.037 --rc geninfo_all_blocks=1 00:32:14.037 --rc geninfo_unexecuted_blocks=1 00:32:14.037 00:32:14.037 ' 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:14.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3886269 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3886269 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 3886269 ']' 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:14.037 
19:09:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:14.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:14.037 19:09:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:14.296 [2024-11-20 19:09:36.384500] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:32:14.296 [2024-11-20 19:09:36.384551] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3886269 ] 00:32:14.296 [2024-11-20 19:09:36.457150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:14.296 [2024-11-20 19:09:36.500448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:14.296 [2024-11-20 19:09:36.500450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:14.296 19:09:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:14.296 19:09:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:32:14.296 19:09:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:14.296 19:09:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:14.296 19:09:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:14.556 19:09:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:14.556 19:09:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:14.556 19:09:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 
00:32:14.556 19:09:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:14.556 19:09:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:14.556 19:09:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:14.556 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:14.556 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:14.556 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:14.556 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:14.556 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:14.556 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:14.556 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:14.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:14.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:14.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:14.556 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:14.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:14.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:14.556 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:32:14.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:14.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:14.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:14.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:14.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:14.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:14.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:14.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:14.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:14.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:14.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:14.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:14.556 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:14.556 ' 00:32:17.093 [2024-11-20 19:09:39.323741] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:18.465 [2024-11-20 19:09:40.660208] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:32:20.994 [2024-11-20 19:09:43.151802] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4261 *** 00:32:23.528 [2024-11-20 19:09:45.322465] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:32:24.905 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:24.905 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:24.905 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:24.905 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:32:24.905 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:32:24.905 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:32:24.905 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:32:24.905 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:24.905 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:32:24.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:32:24.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:24.906 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:24.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:32:24.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:24.906 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 
'nqn.2014-08.org.spdk:cnode2', True] 00:32:24.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:32:24.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:24.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:24.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:24.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:24.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:32:24.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:32:24.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:24.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:32:24.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:24.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:32:24.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:32:24.906 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:32:24.906 19:09:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:32:24.906 19:09:47 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@732 -- # xtrace_disable 00:32:24.906 19:09:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:24.906 19:09:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:32:24.906 19:09:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:24.906 19:09:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:24.906 19:09:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:32:24.906 19:09:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:32:25.472 19:09:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:32:25.472 19:09:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:32:25.472 19:09:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:32:25.472 19:09:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:25.472 19:09:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:25.472 19:09:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:32:25.472 19:09:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:25.472 19:09:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:25.472 19:09:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:32:25.472 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:32:25.472 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts 
delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:25.472 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:32:25.472 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:32:25.472 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:32:25.472 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:32:25.472 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:25.472 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:32:25.473 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:32:25.473 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:32:25.473 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:32:25.473 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:32:25.473 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:32:25.473 ' 00:32:32.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:32:32.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:32:32.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:32.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:32:32.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:32:32.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:32:32.035 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:32:32.035 
Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:32.035 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:32:32.035 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:32:32.035 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:32:32.035 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:32:32.035 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:32:32.035 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:32:32.035 19:09:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:32:32.035 19:09:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:32.035 19:09:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:32.035 19:09:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3886269 00:32:32.035 19:09:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3886269 ']' 00:32:32.035 19:09:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3886269 00:32:32.035 19:09:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:32:32.035 19:09:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:32.035 19:09:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3886269 00:32:32.035 19:09:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:32.035 19:09:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:32.035 19:09:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3886269' 00:32:32.035 killing process with pid 3886269 00:32:32.035 19:09:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 3886269 00:32:32.035 19:09:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 3886269 00:32:32.035 19:09:53 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:32:32.035 19:09:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:32:32.035 19:09:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3886269 ']' 00:32:32.035 19:09:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3886269 00:32:32.035 19:09:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3886269 ']' 00:32:32.035 19:09:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3886269 00:32:32.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3886269) - No such process 00:32:32.035 19:09:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 3886269 is not found' 00:32:32.035 Process with pid 3886269 is not found 00:32:32.035 19:09:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:32:32.035 19:09:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:32:32.035 19:09:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:32:32.035 00:32:32.035 real 0m17.355s 00:32:32.035 user 0m38.340s 00:32:32.035 sys 0m0.756s 00:32:32.035 19:09:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:32.035 19:09:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:32.035 ************************************ 00:32:32.035 END TEST spdkcli_nvmf_tcp 00:32:32.035 ************************************ 00:32:32.035 19:09:53 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:32.035 19:09:53 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:32.035 19:09:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:32:32.035 19:09:53 -- common/autotest_common.sh@10 -- # set +x 00:32:32.035 ************************************ 00:32:32.035 START TEST nvmf_identify_passthru 00:32:32.035 ************************************ 00:32:32.035 19:09:53 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:32.035 * Looking for test storage... 00:32:32.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:32.035 19:09:53 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:32.035 19:09:53 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:32:32.035 19:09:53 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:32.035 19:09:53 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:32.035 19:09:53 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:32.035 19:09:53 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:32.035 19:09:53 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:32.035 19:09:53 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:32:32.035 19:09:53 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:32:32.035 19:09:53 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:32:32.035 19:09:53 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:32:32.035 19:09:53 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:32:32.035 19:09:53 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:32:32.035 19:09:53 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:32:32.035 19:09:53 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:32.035 19:09:53 nvmf_identify_passthru -- scripts/common.sh@344 -- # 
case "$op" in 00:32:32.035 19:09:53 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:32:32.035 19:09:53 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:32.035 19:09:53 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:32.035 19:09:53 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:32:32.035 19:09:53 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:32:32.035 19:09:53 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:32.035 19:09:53 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:32:32.035 19:09:53 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:32:32.035 19:09:53 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:32:32.035 19:09:53 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:32:32.035 19:09:53 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:32.035 19:09:53 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:32:32.035 19:09:53 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:32:32.035 19:09:53 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:32.035 19:09:53 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:32.035 19:09:53 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:32:32.035 19:09:53 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:32.035 19:09:53 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:32.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.035 --rc genhtml_branch_coverage=1 00:32:32.035 --rc genhtml_function_coverage=1 00:32:32.035 --rc genhtml_legend=1 00:32:32.035 --rc geninfo_all_blocks=1 00:32:32.035 --rc geninfo_unexecuted_blocks=1 00:32:32.035 
00:32:32.035 ' 00:32:32.035 19:09:53 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:32.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.035 --rc genhtml_branch_coverage=1 00:32:32.035 --rc genhtml_function_coverage=1 00:32:32.035 --rc genhtml_legend=1 00:32:32.035 --rc geninfo_all_blocks=1 00:32:32.035 --rc geninfo_unexecuted_blocks=1 00:32:32.035 00:32:32.035 ' 00:32:32.035 19:09:53 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:32.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.035 --rc genhtml_branch_coverage=1 00:32:32.035 --rc genhtml_function_coverage=1 00:32:32.035 --rc genhtml_legend=1 00:32:32.035 --rc geninfo_all_blocks=1 00:32:32.035 --rc geninfo_unexecuted_blocks=1 00:32:32.035 00:32:32.035 ' 00:32:32.035 19:09:53 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:32.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.035 --rc genhtml_branch_coverage=1 00:32:32.035 --rc genhtml_function_coverage=1 00:32:32.035 --rc genhtml_legend=1 00:32:32.035 --rc geninfo_all_blocks=1 00:32:32.035 --rc geninfo_unexecuted_blocks=1 00:32:32.035 00:32:32.035 ' 00:32:32.035 19:09:53 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:32.035 19:09:53 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:32:32.035 19:09:53 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:32.035 19:09:53 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:32.035 19:09:53 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:32.035 19:09:53 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:32.035 19:09:53 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:32.035 19:09:53 nvmf_identify_passthru -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:32.035 19:09:53 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:32.035 19:09:53 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:32.036 19:09:53 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:32.036 19:09:53 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:32.036 19:09:53 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:32.036 19:09:53 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:32.036 19:09:53 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:32.036 19:09:53 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:32.036 19:09:53 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:32.036 19:09:53 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:32.036 19:09:53 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:32.036 19:09:53 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:32.036 19:09:53 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:32.036 19:09:53 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:32.036 19:09:53 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:32.036 19:09:53 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.036 19:09:53 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.036 19:09:53 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.036 19:09:53 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:32.036 19:09:53 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.036 19:09:53 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:32:32.036 19:09:53 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:32.036 19:09:53 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:32.036 19:09:53 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:32.036 19:09:53 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:32.036 19:09:53 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:32.036 19:09:53 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:32.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:32.036 19:09:53 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:32.036 19:09:53 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:32.036 19:09:53 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:32.036 19:09:53 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:32.036 19:09:53 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:32.036 19:09:53 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:32.036 19:09:53 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:32.036 19:09:53 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:32.036 19:09:53 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.036 19:09:53 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.036 19:09:53 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.036 19:09:53 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:32.036 19:09:53 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.036 19:09:53 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:32:32.036 19:09:53 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:32.036 19:09:53 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:32.036 19:09:53 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:32.036 19:09:53 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:32.036 19:09:53 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:32.036 19:09:53 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:32.036 19:09:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:32.036 19:09:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:32.036 19:09:53 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:32.036 19:09:53 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:32.036 19:09:53 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:32:32.036 19:09:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:37.312 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:37.312 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:32:37.312 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:32:37.312 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:37.312 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:37.312 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:37.312 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:37.312 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:32:37.312 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:37.312 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:32:37.312 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:32:37.312 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:32:37.312 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:32:37.312 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:32:37.312 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:32:37.312 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:37.312 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:37.312 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:37.312 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:37.312 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:37.312 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:37.312 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:37.312 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:37.312 
19:09:59 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:37.312 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:37.312 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:37.312 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:37.312 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:37.312 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:37.312 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:37.312 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:37.312 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:37.312 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:37.313 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:37.313 Found 0000:86:00.1 
(0x8086 - 0x159b) 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:37.313 Found net devices under 0000:86:00.0: cvl_0_0 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:37.313 19:09:59 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:37.313 Found net devices under 0000:86:00.1: cvl_0_1 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:37.313 
19:09:59 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:37.313 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:37.573 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:37.573 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:37.573 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:37.573 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:37.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:37.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:32:37.573 00:32:37.573 --- 10.0.0.2 ping statistics --- 00:32:37.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:37.573 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:32:37.573 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:37.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:37.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:32:37.573 00:32:37.573 --- 10.0.0.1 ping statistics --- 00:32:37.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:37.573 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:32:37.573 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:37.573 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:32:37.573 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:37.573 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:37.573 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:37.573 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:37.573 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:37.573 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:37.573 19:09:59 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:37.573 19:09:59 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:32:37.573 19:09:59 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:37.573 19:09:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:37.573 19:09:59 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:32:37.573 
19:09:59 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:32:37.573 19:09:59 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:32:37.573 19:09:59 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:32:37.573 19:09:59 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:32:37.573 19:09:59 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:37.573 19:09:59 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:32:37.573 19:09:59 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:37.573 19:09:59 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:37.573 19:09:59 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:37.573 19:09:59 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:37.573 19:09:59 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:32:37.573 19:09:59 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:32:37.573 19:09:59 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:32:37.573 19:09:59 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:32:37.573 19:09:59 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:37.573 19:09:59 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:32:37.573 19:09:59 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:32:42.846 19:10:04 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=PHLN951000C61P6AGN 00:32:42.846 19:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:42.846 19:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:32:42.846 19:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:32:47.040 19:10:09 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:32:47.040 19:10:09 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:32:47.040 19:10:09 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:47.040 19:10:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:47.040 19:10:09 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:32:47.040 19:10:09 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:47.040 19:10:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:47.040 19:10:09 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3893750 00:32:47.040 19:10:09 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:32:47.040 19:10:09 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:47.040 19:10:09 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3893750 00:32:47.040 19:10:09 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 3893750 ']' 00:32:47.040 19:10:09 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:32:47.040 19:10:09 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:47.040 19:10:09 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:47.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:47.041 19:10:09 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:47.041 19:10:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:47.041 [2024-11-20 19:10:09.346939] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:32:47.041 [2024-11-20 19:10:09.346981] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:47.298 [2024-11-20 19:10:09.427062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:47.298 [2024-11-20 19:10:09.469835] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:47.298 [2024-11-20 19:10:09.469870] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:47.298 [2024-11-20 19:10:09.469880] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:47.298 [2024-11-20 19:10:09.469886] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:47.298 [2024-11-20 19:10:09.469891] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:47.298 [2024-11-20 19:10:09.471390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:47.298 [2024-11-20 19:10:09.471501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:47.298 [2024-11-20 19:10:09.471606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:47.298 [2024-11-20 19:10:09.471607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:48.231 19:10:10 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:48.231 19:10:10 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:32:48.231 19:10:10 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:32:48.231 19:10:10 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.231 19:10:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:48.231 INFO: Log level set to 20 00:32:48.231 INFO: Requests: 00:32:48.231 { 00:32:48.231 "jsonrpc": "2.0", 00:32:48.231 "method": "nvmf_set_config", 00:32:48.231 "id": 1, 00:32:48.231 "params": { 00:32:48.231 "admin_cmd_passthru": { 00:32:48.231 "identify_ctrlr": true 00:32:48.231 } 00:32:48.231 } 00:32:48.231 } 00:32:48.231 00:32:48.231 INFO: response: 00:32:48.231 { 00:32:48.231 "jsonrpc": "2.0", 00:32:48.231 "id": 1, 00:32:48.231 "result": true 00:32:48.231 } 00:32:48.231 00:32:48.231 19:10:10 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.231 19:10:10 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:32:48.231 19:10:10 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.231 19:10:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:48.231 INFO: Setting log level to 20 00:32:48.231 INFO: Setting log level to 20 00:32:48.231 INFO: Log level set to 20 00:32:48.231 INFO: Log level set to 20 00:32:48.231 
INFO: Requests: 00:32:48.231 { 00:32:48.231 "jsonrpc": "2.0", 00:32:48.231 "method": "framework_start_init", 00:32:48.231 "id": 1 00:32:48.231 } 00:32:48.231 00:32:48.231 INFO: Requests: 00:32:48.231 { 00:32:48.231 "jsonrpc": "2.0", 00:32:48.231 "method": "framework_start_init", 00:32:48.231 "id": 1 00:32:48.231 } 00:32:48.231 00:32:48.231 [2024-11-20 19:10:10.278881] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:32:48.231 INFO: response: 00:32:48.231 { 00:32:48.231 "jsonrpc": "2.0", 00:32:48.231 "id": 1, 00:32:48.231 "result": true 00:32:48.231 } 00:32:48.231 00:32:48.231 INFO: response: 00:32:48.231 { 00:32:48.231 "jsonrpc": "2.0", 00:32:48.231 "id": 1, 00:32:48.231 "result": true 00:32:48.231 } 00:32:48.231 00:32:48.231 19:10:10 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.231 19:10:10 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:48.231 19:10:10 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.231 19:10:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:48.231 INFO: Setting log level to 40 00:32:48.231 INFO: Setting log level to 40 00:32:48.231 INFO: Setting log level to 40 00:32:48.231 [2024-11-20 19:10:10.292203] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:48.231 19:10:10 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.231 19:10:10 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:32:48.231 19:10:10 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:48.231 19:10:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:48.231 19:10:10 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:32:48.231 19:10:10 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.231 19:10:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:51.511 Nvme0n1 00:32:51.511 19:10:13 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.511 19:10:13 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:32:51.511 19:10:13 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.511 19:10:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:51.511 19:10:13 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.511 19:10:13 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:51.511 19:10:13 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.511 19:10:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:51.511 19:10:13 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.511 19:10:13 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:51.511 19:10:13 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.511 19:10:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:51.511 [2024-11-20 19:10:13.205691] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:51.512 19:10:13 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.512 19:10:13 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:32:51.512 19:10:13 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.512 19:10:13 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:51.512 [ 00:32:51.512 { 00:32:51.512 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:51.512 "subtype": "Discovery", 00:32:51.512 "listen_addresses": [], 00:32:51.512 "allow_any_host": true, 00:32:51.512 "hosts": [] 00:32:51.512 }, 00:32:51.512 { 00:32:51.512 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:51.512 "subtype": "NVMe", 00:32:51.512 "listen_addresses": [ 00:32:51.512 { 00:32:51.512 "trtype": "TCP", 00:32:51.512 "adrfam": "IPv4", 00:32:51.512 "traddr": "10.0.0.2", 00:32:51.512 "trsvcid": "4420" 00:32:51.512 } 00:32:51.512 ], 00:32:51.512 "allow_any_host": true, 00:32:51.512 "hosts": [], 00:32:51.512 "serial_number": "SPDK00000000000001", 00:32:51.512 "model_number": "SPDK bdev Controller", 00:32:51.512 "max_namespaces": 1, 00:32:51.512 "min_cntlid": 1, 00:32:51.512 "max_cntlid": 65519, 00:32:51.512 "namespaces": [ 00:32:51.512 { 00:32:51.512 "nsid": 1, 00:32:51.512 "bdev_name": "Nvme0n1", 00:32:51.512 "name": "Nvme0n1", 00:32:51.512 "nguid": "AC955D53296D4B07B323911EA1EBE4EC", 00:32:51.512 "uuid": "ac955d53-296d-4b07-b323-911ea1ebe4ec" 00:32:51.512 } 00:32:51.512 ] 00:32:51.512 } 00:32:51.512 ] 00:32:51.512 19:10:13 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.512 19:10:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:51.512 19:10:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:32:51.512 19:10:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:32:51.512 19:10:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLN951000C61P6AGN 00:32:51.512 19:10:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:51.512 19:10:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:32:51.512 19:10:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:32:51.512 19:10:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:32:51.512 19:10:13 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLN951000C61P6AGN '!=' PHLN951000C61P6AGN ']' 00:32:51.512 19:10:13 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:32:51.512 19:10:13 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:51.512 19:10:13 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.512 19:10:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:51.512 19:10:13 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.512 19:10:13 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:32:51.512 19:10:13 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:32:51.512 19:10:13 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:51.512 19:10:13 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:32:51.512 19:10:13 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:51.512 19:10:13 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:32:51.512 19:10:13 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:51.512 19:10:13 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:51.512 rmmod nvme_tcp 00:32:51.512 rmmod nvme_fabrics 00:32:51.512 rmmod nvme_keyring 00:32:51.512 19:10:13 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:51.512 19:10:13 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:32:51.512 19:10:13 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:32:51.512 19:10:13 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 3893750 ']' 00:32:51.512 19:10:13 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3893750 00:32:51.512 19:10:13 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 3893750 ']' 00:32:51.512 19:10:13 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 3893750 00:32:51.512 19:10:13 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:32:51.512 19:10:13 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:51.512 19:10:13 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3893750 00:32:51.512 19:10:13 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:51.512 19:10:13 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:51.512 19:10:13 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3893750' 00:32:51.512 killing process with pid 3893750 00:32:51.512 19:10:13 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 3893750 00:32:51.512 19:10:13 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 3893750 00:32:54.036 19:10:15 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:54.036 19:10:15 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:54.036 19:10:15 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:54.036 19:10:15 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:32:54.036 19:10:15 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:32:54.036 19:10:15 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:54.036 19:10:15 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:32:54.036 19:10:15 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:54.037 19:10:15 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:54.037 19:10:15 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:54.037 19:10:15 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:54.037 19:10:15 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:55.942 19:10:17 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:55.942 00:32:55.942 real 0m24.305s 00:32:55.942 user 0m33.068s 00:32:55.942 sys 0m6.380s 00:32:55.942 19:10:17 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:55.942 19:10:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:55.942 ************************************ 00:32:55.942 END TEST nvmf_identify_passthru 00:32:55.942 ************************************ 00:32:55.942 19:10:17 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:55.942 19:10:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:55.942 19:10:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:55.942 19:10:17 -- common/autotest_common.sh@10 -- # set +x 00:32:55.942 ************************************ 00:32:55.942 START TEST nvmf_dif 00:32:55.942 ************************************ 00:32:55.942 19:10:17 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:55.942 * Looking for test storage... 
00:32:55.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:55.942 19:10:18 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:55.942 19:10:18 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:32:55.942 19:10:18 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:55.942 19:10:18 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:55.942 19:10:18 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:55.942 19:10:18 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:55.942 19:10:18 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:55.942 19:10:18 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:32:55.942 19:10:18 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:32:55.942 19:10:18 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:32:55.942 19:10:18 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:32:55.942 19:10:18 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:32:55.942 19:10:18 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:32:55.942 19:10:18 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:32:55.942 19:10:18 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:55.942 19:10:18 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:32:55.942 19:10:18 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:32:55.942 19:10:18 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:55.942 19:10:18 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:55.942 19:10:18 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:32:55.942 19:10:18 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:32:55.942 19:10:18 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:55.942 19:10:18 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:32:55.942 19:10:18 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:32:55.942 19:10:18 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:32:55.942 19:10:18 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:32:55.942 19:10:18 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:55.942 19:10:18 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:32:55.942 19:10:18 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:32:55.942 19:10:18 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:55.942 19:10:18 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:55.942 19:10:18 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:32:55.942 19:10:18 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:55.942 19:10:18 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:55.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:55.942 --rc genhtml_branch_coverage=1 00:32:55.942 --rc genhtml_function_coverage=1 00:32:55.942 --rc genhtml_legend=1 00:32:55.942 --rc geninfo_all_blocks=1 00:32:55.942 --rc geninfo_unexecuted_blocks=1 00:32:55.942 00:32:55.942 ' 00:32:55.942 19:10:18 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:55.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:55.942 --rc genhtml_branch_coverage=1 00:32:55.942 --rc genhtml_function_coverage=1 00:32:55.942 --rc genhtml_legend=1 00:32:55.942 --rc geninfo_all_blocks=1 00:32:55.942 --rc geninfo_unexecuted_blocks=1 00:32:55.942 00:32:55.942 ' 00:32:55.942 19:10:18 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:32:55.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:55.942 --rc genhtml_branch_coverage=1 00:32:55.942 --rc genhtml_function_coverage=1 00:32:55.942 --rc genhtml_legend=1 00:32:55.942 --rc geninfo_all_blocks=1 00:32:55.942 --rc geninfo_unexecuted_blocks=1 00:32:55.942 00:32:55.942 ' 00:32:55.942 19:10:18 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:55.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:55.942 --rc genhtml_branch_coverage=1 00:32:55.942 --rc genhtml_function_coverage=1 00:32:55.942 --rc genhtml_legend=1 00:32:55.942 --rc geninfo_all_blocks=1 00:32:55.942 --rc geninfo_unexecuted_blocks=1 00:32:55.942 00:32:55.942 ' 00:32:55.942 19:10:18 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:55.942 19:10:18 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:32:55.942 19:10:18 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:55.942 19:10:18 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:55.942 19:10:18 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:55.942 19:10:18 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:55.942 19:10:18 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:55.942 19:10:18 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:55.942 19:10:18 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:55.942 19:10:18 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:55.942 19:10:18 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:55.942 19:10:18 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:55.942 19:10:18 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:55.942 19:10:18 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:55.942 19:10:18 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:55.943 19:10:18 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:55.943 19:10:18 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:55.943 19:10:18 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:55.943 19:10:18 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:55.943 19:10:18 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:32:55.943 19:10:18 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:55.943 19:10:18 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:55.943 19:10:18 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:55.943 19:10:18 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.943 19:10:18 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.943 19:10:18 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.943 19:10:18 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:32:55.943 19:10:18 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.943 19:10:18 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:32:55.943 19:10:18 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:55.943 19:10:18 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:55.943 19:10:18 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:55.943 19:10:18 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:55.943 19:10:18 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:55.943 19:10:18 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:55.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:55.943 19:10:18 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:55.943 19:10:18 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:55.943 19:10:18 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:55.943 19:10:18 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:32:55.943 19:10:18 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
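The lcov probe earlier in this test (`scripts/common.sh` running `lt 1.15 2` via `cmp_versions`) decides which coverage options to export by comparing dotted version strings component-wise. A minimal standalone sketch of that comparison, under the assumption that it reduces to numeric per-component ordering with missing components treated as 0 (the function name `version_lt` is ours, not SPDK's):

```shell
#!/usr/bin/env bash
# Hedged sketch of a dotted-version "less than" check, in the spirit of
# scripts/common.sh cmp_versions. Not the SPDK implementation itself.
version_lt() {
  local IFS=.
  local -a a b
  read -ra a <<< "$1"   # split "1.15" into (1 15)
  read -ra b <<< "$2"   # split "2" into (2)
  local i x y
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    x=${a[i]:-0}; y=${b[i]:-0}   # pad short versions with 0
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2: use legacy --rc options"
```

This mirrors why the harness above selects the `--rc lcov_branch_coverage=1` style flags when the installed lcov reports a 1.x version.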
00:32:55.943 19:10:18 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:32:55.943 19:10:18 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:32:55.943 19:10:18 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:32:55.943 19:10:18 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:55.943 19:10:18 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:55.943 19:10:18 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:55.943 19:10:18 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:55.943 19:10:18 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:55.943 19:10:18 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:55.943 19:10:18 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:55.943 19:10:18 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:55.943 19:10:18 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:55.943 19:10:18 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:55.943 19:10:18 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:32:55.943 19:10:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:33:02.518 19:10:23 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:02.518 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:02.518 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:02.518 19:10:23 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:02.518 Found net devices under 0000:86:00.0: cvl_0_0 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:02.518 Found net devices under 0000:86:00.1: cvl_0_1 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:02.518 
19:10:23 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:02.518 19:10:23 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:02.519 19:10:23 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:02.519 19:10:23 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:02.519 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:02.519 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.506 ms 00:33:02.519 00:33:02.519 --- 10.0.0.2 ping statistics --- 00:33:02.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:02.519 rtt min/avg/max/mdev = 0.506/0.506/0.506/0.000 ms 00:33:02.519 19:10:23 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:02.519 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:02.519 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:33:02.519 00:33:02.519 --- 10.0.0.1 ping statistics --- 00:33:02.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:02.519 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:33:02.519 19:10:23 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:02.519 19:10:23 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:33:02.519 19:10:23 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:02.519 19:10:23 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:04.426 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:33:04.426 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:04.426 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:33:04.426 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:33:04.426 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:33:04.426 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:33:04.426 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:33:04.426 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:33:04.426 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:33:04.426 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:33:04.426 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:33:04.426 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:33:04.426 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:33:04.426 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:33:04.426 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:33:04.426 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:33:04.426 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:33:04.685 19:10:26 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:04.685 19:10:26 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:04.685 19:10:26 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:04.685 19:10:26 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:04.685 19:10:26 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:04.685 19:10:26 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:04.685 19:10:26 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:04.685 19:10:26 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:04.685 19:10:26 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:04.685 19:10:26 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:04.685 19:10:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:04.685 19:10:26 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3899450 00:33:04.685 19:10:26 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3899450 00:33:04.685 19:10:26 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:04.685 19:10:26 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 3899450 ']' 00:33:04.685 19:10:26 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:04.685 19:10:26 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:04.685 19:10:26 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:04.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:04.685 19:10:26 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:04.685 19:10:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:04.685 [2024-11-20 19:10:26.949988] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:33:04.685 [2024-11-20 19:10:26.950033] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:04.944 [2024-11-20 19:10:27.031160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:04.944 [2024-11-20 19:10:27.071563] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:04.944 [2024-11-20 19:10:27.071598] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:04.944 [2024-11-20 19:10:27.071605] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:04.944 [2024-11-20 19:10:27.071615] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:04.944 [2024-11-20 19:10:27.071620] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:04.944 [2024-11-20 19:10:27.072186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:04.944 19:10:27 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:04.944 19:10:27 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:33:04.944 19:10:27 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:04.945 19:10:27 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:04.945 19:10:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:04.945 19:10:27 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:04.945 19:10:27 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:04.945 19:10:27 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:04.945 19:10:27 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.945 19:10:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:04.945 [2024-11-20 19:10:27.203063] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:04.945 19:10:27 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.945 19:10:27 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:04.945 19:10:27 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:04.945 19:10:27 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:04.945 19:10:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:04.945 ************************************ 00:33:04.945 START TEST fio_dif_1_default 00:33:04.945 ************************************ 00:33:04.945 19:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:33:04.945 19:10:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:04.945 19:10:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:04.945 19:10:27 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:33:04.945 19:10:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:33:04.945 19:10:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:04.945 19:10:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:04.945 19:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.945 19:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:04.945 bdev_null0 00:33:04.945 19:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.945 19:10:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:04.945 19:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.945 19:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:05.214 19:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.214 19:10:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:05.214 19:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.214 19:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:05.214 19:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.214 19:10:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:05.214 19:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.214 19:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:05.214 [2024-11-20 19:10:27.283417] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:05.214 19:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.214 19:10:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:05.214 19:10:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:05.214 19:10:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:05.214 19:10:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:33:05.214 19:10:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:05.214 19:10:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:33:05.214 19:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:05.214 19:10:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:05.214 19:10:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:05.215 19:10:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:05.215 { 00:33:05.215 "params": { 00:33:05.215 "name": "Nvme$subsystem", 00:33:05.215 "trtype": "$TEST_TRANSPORT", 00:33:05.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:05.215 "adrfam": "ipv4", 00:33:05.215 "trsvcid": "$NVMF_PORT", 00:33:05.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:05.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:05.215 "hdgst": ${hdgst:-false}, 00:33:05.215 "ddgst": ${ddgst:-false} 00:33:05.215 }, 00:33:05.215 "method": "bdev_nvme_attach_controller" 00:33:05.215 } 00:33:05.215 EOF 00:33:05.215 )") 00:33:05.215 19:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:33:05.215 19:10:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:05.215 19:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:05.215 19:10:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:05.215 19:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:05.215 19:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:05.215 19:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:33:05.215 19:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:05.215 19:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:05.215 19:10:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:33:05.215 19:10:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:05.215 19:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:05.215 19:10:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:05.215 19:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:33:05.215 19:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:05.215 19:10:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:33:05.215 19:10:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:33:05.215 19:10:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:05.215 "params": { 00:33:05.215 "name": "Nvme0", 00:33:05.215 "trtype": "tcp", 00:33:05.215 "traddr": "10.0.0.2", 00:33:05.215 "adrfam": "ipv4", 00:33:05.215 "trsvcid": "4420", 00:33:05.215 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:05.215 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:05.215 "hdgst": false, 00:33:05.215 "ddgst": false 00:33:05.215 }, 00:33:05.215 "method": "bdev_nvme_attach_controller" 00:33:05.215 }' 00:33:05.215 19:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:05.215 19:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:05.215 19:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:05.215 19:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:05.215 19:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:05.215 19:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:05.215 19:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:05.215 19:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:05.215 19:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:05.215 19:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:05.507 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:05.507 fio-3.35 
00:33:05.507 Starting 1 thread 00:33:17.765 00:33:17.765 filename0: (groupid=0, jobs=1): err= 0: pid=3899823: Wed Nov 20 19:10:38 2024 00:33:17.765 read: IOPS=207, BW=829KiB/s (849kB/s)(8320KiB/10036msec) 00:33:17.765 slat (nsec): min=5795, max=33915, avg=6329.39, stdev=922.28 00:33:17.765 clat (usec): min=341, max=46009, avg=19282.38, stdev=20339.51 00:33:17.765 lat (usec): min=348, max=46043, avg=19288.71, stdev=20339.47 00:33:17.765 clat percentiles (usec): 00:33:17.765 | 1.00th=[ 351], 5.00th=[ 359], 10.00th=[ 367], 20.00th=[ 375], 00:33:17.765 | 30.00th=[ 388], 40.00th=[ 396], 50.00th=[ 424], 60.00th=[40633], 00:33:17.765 | 70.00th=[40633], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:33:17.765 | 99.00th=[41681], 99.50th=[41681], 99.90th=[45876], 99.95th=[45876], 00:33:17.765 | 99.99th=[45876] 00:33:17.765 bw ( KiB/s): min= 734, max= 960, per=100.00%, avg=830.30, stdev=73.90, samples=20 00:33:17.765 iops : min= 183, max= 240, avg=207.55, stdev=18.51, samples=20 00:33:17.765 lat (usec) : 500=53.61%, 750=0.05% 00:33:17.765 lat (msec) : 50=46.35% 00:33:17.765 cpu : usr=92.60%, sys=7.00%, ctx=69, majf=0, minf=0 00:33:17.765 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:17.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:17.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:17.765 issued rwts: total=2080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:17.765 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:17.765 00:33:17.765 Run status group 0 (all jobs): 00:33:17.765 READ: bw=829KiB/s (849kB/s), 829KiB/s-829KiB/s (849kB/s-849kB/s), io=8320KiB (8520kB), run=10036-10036msec 00:33:17.765 19:10:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:17.765 19:10:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:33:17.765 19:10:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 
00:33:17.765 19:10:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:17.765 19:10:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:33:17.765 19:10:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:17.765 19:10:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.765 19:10:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:17.765 19:10:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.765 19:10:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:17.765 19:10:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.765 19:10:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:17.765 19:10:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.765 00:33:17.765 real 0m11.129s 00:33:17.765 user 0m16.041s 00:33:17.765 sys 0m0.986s 00:33:17.765 19:10:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:17.765 19:10:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:17.765 ************************************ 00:33:17.765 END TEST fio_dif_1_default 00:33:17.765 ************************************ 00:33:17.765 19:10:38 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:17.765 19:10:38 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:17.765 19:10:38 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:17.765 19:10:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:17.765 ************************************ 00:33:17.765 START TEST fio_dif_1_multi_subsystems 00:33:17.765 ************************************ 00:33:17.765 19:10:38 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:33:17.765 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:33:17.765 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:17.765 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:33:17.765 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:17.765 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:33:17.765 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:33:17.765 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:17.765 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.765 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:17.765 bdev_null0 00:33:17.765 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.765 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:17.765 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.765 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:17.765 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.765 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:17.765 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.765 19:10:38 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:17.765 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:17.766 [2024-11-20 19:10:38.476608] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:17.766 bdev_null1 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:17.766 { 00:33:17.766 "params": { 00:33:17.766 "name": "Nvme$subsystem", 00:33:17.766 "trtype": "$TEST_TRANSPORT", 00:33:17.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:17.766 "adrfam": "ipv4", 00:33:17.766 "trsvcid": "$NVMF_PORT", 00:33:17.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:17.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:17.766 "hdgst": ${hdgst:-false}, 00:33:17.766 "ddgst": ${ddgst:-false} 00:33:17.766 }, 00:33:17.766 "method": "bdev_nvme_attach_controller" 00:33:17.766 } 00:33:17.766 EOF 00:33:17.766 )") 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 
00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:17.766 { 00:33:17.766 "params": { 00:33:17.766 "name": "Nvme$subsystem", 00:33:17.766 "trtype": "$TEST_TRANSPORT", 00:33:17.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:17.766 "adrfam": "ipv4", 00:33:17.766 "trsvcid": "$NVMF_PORT", 00:33:17.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:17.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:17.766 "hdgst": ${hdgst:-false}, 00:33:17.766 "ddgst": ${ddgst:-false} 00:33:17.766 }, 00:33:17.766 "method": "bdev_nvme_attach_controller" 00:33:17.766 } 00:33:17.766 EOF 00:33:17.766 )") 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:17.766 "params": { 00:33:17.766 "name": "Nvme0", 00:33:17.766 "trtype": "tcp", 00:33:17.766 "traddr": "10.0.0.2", 00:33:17.766 "adrfam": "ipv4", 00:33:17.766 "trsvcid": "4420", 00:33:17.766 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:17.766 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:17.766 "hdgst": false, 00:33:17.766 "ddgst": false 00:33:17.766 }, 00:33:17.766 "method": "bdev_nvme_attach_controller" 00:33:17.766 },{ 00:33:17.766 "params": { 00:33:17.766 "name": "Nvme1", 00:33:17.766 "trtype": "tcp", 00:33:17.766 "traddr": "10.0.0.2", 00:33:17.766 "adrfam": "ipv4", 00:33:17.766 "trsvcid": "4420", 00:33:17.766 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:17.766 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:17.766 "hdgst": false, 00:33:17.766 "ddgst": false 00:33:17.766 }, 00:33:17.766 "method": "bdev_nvme_attach_controller" 00:33:17.766 }' 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:17.766 19:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:17.766 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:17.766 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:17.766 fio-3.35 00:33:17.766 Starting 2 threads 00:33:27.742 00:33:27.742 filename0: (groupid=0, jobs=1): err= 0: pid=3901789: Wed Nov 20 19:10:49 2024 00:33:27.742 read: IOPS=203, BW=814KiB/s (833kB/s)(8160KiB/10028msec) 00:33:27.742 slat (nsec): min=5907, max=49114, avg=6941.99, stdev=2017.16 00:33:27.742 clat (usec): min=371, max=42559, avg=19641.95, stdev=20440.34 00:33:27.742 lat (usec): min=377, max=42585, avg=19648.90, stdev=20439.83 00:33:27.742 clat percentiles (usec): 00:33:27.742 | 1.00th=[ 392], 5.00th=[ 400], 10.00th=[ 404], 20.00th=[ 412], 00:33:27.742 | 30.00th=[ 420], 40.00th=[ 445], 50.00th=[ 611], 60.00th=[40633], 00:33:27.742 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:33:27.742 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:33:27.742 | 99.99th=[42730] 00:33:27.742 bw ( KiB/s): min= 768, max= 1024, per=67.57%, avg=814.40, stdev=80.07, samples=20 00:33:27.742 iops : min= 192, max= 256, avg=203.60, stdev=20.02, samples=20 00:33:27.742 lat (usec) : 500=42.25%, 750=10.49%, 1000=0.39% 00:33:27.742 lat (msec) : 50=46.86% 00:33:27.742 cpu : usr=96.66%, sys=3.09%, ctx=7, majf=0, minf=0 00:33:27.742 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:27.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:33:27.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:27.742 issued rwts: total=2040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:27.742 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:27.742 filename1: (groupid=0, jobs=1): err= 0: pid=3901790: Wed Nov 20 19:10:49 2024 00:33:27.742 read: IOPS=97, BW=392KiB/s (401kB/s)(3920KiB/10010msec) 00:33:27.742 slat (nsec): min=5882, max=49398, avg=7577.39, stdev=2682.47 00:33:27.742 clat (usec): min=549, max=42012, avg=40832.71, stdev=2583.66 00:33:27.742 lat (usec): min=556, max=42023, avg=40840.28, stdev=2583.69 00:33:27.742 clat percentiles (usec): 00:33:27.742 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:33:27.742 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:27.742 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:27.742 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:27.742 | 99.99th=[42206] 00:33:27.742 bw ( KiB/s): min= 384, max= 416, per=32.38%, avg=390.40, stdev=13.13, samples=20 00:33:27.742 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:33:27.742 lat (usec) : 750=0.41% 00:33:27.742 lat (msec) : 50=99.59% 00:33:27.742 cpu : usr=96.78%, sys=2.97%, ctx=7, majf=0, minf=1 00:33:27.742 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:27.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:27.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:27.742 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:27.742 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:27.742 00:33:27.742 Run status group 0 (all jobs): 00:33:27.742 READ: bw=1205KiB/s (1234kB/s), 392KiB/s-814KiB/s (401kB/s-833kB/s), io=11.8MiB (12.4MB), run=10010-10028msec 00:33:27.742 19:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # 
destroy_subsystems 0 1 00:33:27.742 19:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:33:27.742 19:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:27.742 19:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:27.742 19:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:33:27.742 19:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:27.742 19:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.742 19:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:27.742 19:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.742 19:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:27.742 19:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.742 19:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:27.742 19:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.742 19:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:27.742 19:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:27.742 19:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:33:27.742 19:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:27.742 19:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.742 19:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:27.742 19:10:49 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.742 19:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:27.742 19:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.742 19:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:27.742 19:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.742 00:33:27.742 real 0m11.447s 00:33:27.742 user 0m26.331s 00:33:27.742 sys 0m1.003s 00:33:27.742 19:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:27.742 19:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:27.742 ************************************ 00:33:27.742 END TEST fio_dif_1_multi_subsystems 00:33:27.742 ************************************ 00:33:27.742 19:10:49 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:33:27.742 19:10:49 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:27.742 19:10:49 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:27.742 19:10:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:27.742 ************************************ 00:33:27.742 START TEST fio_dif_rand_params 00:33:27.743 ************************************ 00:33:27.743 19:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:33:27.743 19:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:33:27.743 19:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:33:27.743 19:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:33:27.743 19:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:33:27.743 19:10:49 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:33:27.743 19:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:33:27.743 19:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:33:27.743 19:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:33:27.743 19:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:27.743 19:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:27.743 19:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:27.743 19:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:27.743 19:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:27.743 19:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.743 19:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:27.743 bdev_null0 00:33:27.743 19:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.743 19:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:27.743 19:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.743 19:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:27.743 19:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.743 19:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:27.743 19:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.743 19:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:33:27.743 19:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.743 19:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:27.743 19:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.743 19:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:27.743 [2024-11-20 19:10:50.000120] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:27.743 19:10:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.743 19:10:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:33:27.743 19:10:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:33:27.743 19:10:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:27.743 19:10:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:27.743 19:10:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:27.743 19:10:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:27.743 19:10:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:27.743 19:10:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:27.743 19:10:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:27.743 19:10:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:27.743 { 00:33:27.743 "params": { 00:33:27.743 "name": "Nvme$subsystem", 00:33:27.743 "trtype": "$TEST_TRANSPORT", 00:33:27.743 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:33:27.743 "adrfam": "ipv4", 00:33:27.743 "trsvcid": "$NVMF_PORT", 00:33:27.743 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:27.743 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:27.743 "hdgst": ${hdgst:-false}, 00:33:27.743 "ddgst": ${ddgst:-false} 00:33:27.743 }, 00:33:27.743 "method": "bdev_nvme_attach_controller" 00:33:27.743 } 00:33:27.743 EOF 00:33:27.743 )") 00:33:27.743 19:10:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:27.743 19:10:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:27.743 19:10:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:27.743 19:10:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:27.743 19:10:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:27.743 19:10:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:27.743 19:10:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:27.743 19:10:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:27.743 19:10:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:27.743 19:10:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:27.743 19:10:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:27.743 19:10:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:27.743 19:10:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:27.743 19:10:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:27.743 19:10:50 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:27.743 19:10:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:33:27.743 19:10:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:27.743 19:10:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:27.743 "params": { 00:33:27.743 "name": "Nvme0", 00:33:27.743 "trtype": "tcp", 00:33:27.743 "traddr": "10.0.0.2", 00:33:27.743 "adrfam": "ipv4", 00:33:27.743 "trsvcid": "4420", 00:33:27.743 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:27.743 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:27.743 "hdgst": false, 00:33:27.743 "ddgst": false 00:33:27.743 }, 00:33:27.743 "method": "bdev_nvme_attach_controller" 00:33:27.743 }' 00:33:27.743 19:10:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:27.743 19:10:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:27.743 19:10:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:27.743 19:10:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:27.743 19:10:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:27.743 19:10:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:28.021 19:10:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:28.021 19:10:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:28.021 19:10:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:28.021 19:10:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:28.284 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:28.284 ... 00:33:28.284 fio-3.35 00:33:28.284 Starting 3 threads 00:33:34.849 00:33:34.849 filename0: (groupid=0, jobs=1): err= 0: pid=3903754: Wed Nov 20 19:10:56 2024 00:33:34.849 read: IOPS=307, BW=38.4MiB/s (40.3MB/s)(193MiB/5009msec) 00:33:34.849 slat (nsec): min=6109, max=44115, avg=10744.93, stdev=2127.19 00:33:34.849 clat (usec): min=4340, max=51489, avg=9744.71, stdev=6078.83 00:33:34.849 lat (usec): min=4346, max=51500, avg=9755.46, stdev=6078.71 00:33:34.849 clat percentiles (usec): 00:33:34.849 | 1.00th=[ 6521], 5.00th=[ 7308], 10.00th=[ 7570], 20.00th=[ 7963], 00:33:34.849 | 30.00th=[ 8291], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 9110], 00:33:34.849 | 70.00th=[ 9372], 80.00th=[ 9765], 90.00th=[10421], 95.00th=[10945], 00:33:34.849 | 99.00th=[49021], 99.50th=[49546], 99.90th=[51119], 99.95th=[51643], 00:33:34.849 | 99.99th=[51643] 00:33:34.849 bw ( KiB/s): min=30208, max=45056, per=33.18%, avg=39347.20, stdev=5257.60, samples=10 00:33:34.849 iops : min= 236, max= 352, avg=307.40, stdev=41.07, samples=10 00:33:34.849 lat (msec) : 10=85.06%, 20=12.60%, 50=2.01%, 100=0.32% 00:33:34.849 cpu : usr=94.45%, sys=5.27%, ctx=13, majf=0, minf=79 00:33:34.849 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:34.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:34.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:34.849 issued rwts: total=1540,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:34.849 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:34.849 filename0: (groupid=0, jobs=1): err= 0: pid=3903755: Wed Nov 20 19:10:56 2024 00:33:34.849 read: IOPS=322, BW=40.4MiB/s (42.3MB/s)(204MiB/5044msec) 00:33:34.849 slat (nsec): min=6120, max=26041, avg=10936.94, stdev=1915.56 00:33:34.849 
clat (usec): min=2965, max=47404, avg=9250.27, stdev=2742.05 00:33:34.849 lat (usec): min=2971, max=47430, avg=9261.21, stdev=2742.69 00:33:34.849 clat percentiles (usec): 00:33:34.849 | 1.00th=[ 3621], 5.00th=[ 5669], 10.00th=[ 6587], 20.00th=[ 8029], 00:33:34.849 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9765], 00:33:34.849 | 70.00th=[10159], 80.00th=[10552], 90.00th=[11207], 95.00th=[11600], 00:33:34.849 | 99.00th=[12518], 99.50th=[13042], 99.90th=[47449], 99.95th=[47449], 00:33:34.849 | 99.99th=[47449] 00:33:34.849 bw ( KiB/s): min=37888, max=47104, per=35.12%, avg=41651.20, stdev=3176.99, samples=10 00:33:34.849 iops : min= 296, max= 368, avg=325.40, stdev=24.82, samples=10 00:33:34.849 lat (msec) : 4=3.19%, 10=63.04%, 20=33.46%, 50=0.31% 00:33:34.849 cpu : usr=93.89%, sys=5.81%, ctx=9, majf=0, minf=27 00:33:34.849 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:34.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:34.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:34.849 issued rwts: total=1629,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:34.849 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:34.849 filename0: (groupid=0, jobs=1): err= 0: pid=3903756: Wed Nov 20 19:10:56 2024 00:33:34.849 read: IOPS=298, BW=37.3MiB/s (39.1MB/s)(188MiB/5043msec) 00:33:34.849 slat (nsec): min=6108, max=26557, avg=10963.45, stdev=1832.55 00:33:34.849 clat (usec): min=3780, max=50955, avg=10019.02, stdev=4081.09 00:33:34.849 lat (usec): min=3786, max=50962, avg=10029.99, stdev=4081.22 00:33:34.849 clat percentiles (usec): 00:33:34.849 | 1.00th=[ 5800], 5.00th=[ 6456], 10.00th=[ 7373], 20.00th=[ 8586], 00:33:34.849 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9765], 60.00th=[10159], 00:33:34.849 | 70.00th=[10683], 80.00th=[11076], 90.00th=[11600], 95.00th=[11994], 00:33:34.849 | 99.00th=[13566], 99.50th=[49021], 99.90th=[51119], 99.95th=[51119], 
00:33:34.849 | 99.99th=[51119] 00:33:34.849 bw ( KiB/s): min=36096, max=42240, per=32.42%, avg=38443.60, stdev=1713.02, samples=10 00:33:34.849 iops : min= 282, max= 330, avg=300.30, stdev=13.40, samples=10 00:33:34.849 lat (msec) : 4=0.20%, 10=54.99%, 20=43.88%, 50=0.66%, 100=0.27% 00:33:34.849 cpu : usr=94.47%, sys=5.26%, ctx=10, majf=0, minf=47 00:33:34.849 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:34.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:34.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:34.849 issued rwts: total=1504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:34.849 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:34.849 00:33:34.849 Run status group 0 (all jobs): 00:33:34.849 READ: bw=116MiB/s (121MB/s), 37.3MiB/s-40.4MiB/s (39.1MB/s-42.3MB/s), io=584MiB (612MB), run=5009-5044msec 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:34.849 bdev_null0 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.849 
19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:34.849 [2024-11-20 19:10:56.412573] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:34.849 bdev_null1 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.849 
19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:34.849 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:33:34.850 bdev_null2 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem 
config 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:34.850 { 00:33:34.850 "params": { 00:33:34.850 "name": "Nvme$subsystem", 00:33:34.850 "trtype": "$TEST_TRANSPORT", 00:33:34.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:34.850 "adrfam": "ipv4", 00:33:34.850 "trsvcid": "$NVMF_PORT", 00:33:34.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:34.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:34.850 "hdgst": ${hdgst:-false}, 00:33:34.850 "ddgst": ${ddgst:-false} 00:33:34.850 }, 00:33:34.850 "method": "bdev_nvme_attach_controller" 00:33:34.850 } 00:33:34.850 EOF 00:33:34.850 )") 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # shift 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:34.850 { 00:33:34.850 "params": { 00:33:34.850 "name": "Nvme$subsystem", 00:33:34.850 "trtype": "$TEST_TRANSPORT", 00:33:34.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:34.850 "adrfam": "ipv4", 00:33:34.850 "trsvcid": "$NVMF_PORT", 00:33:34.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:34.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:34.850 "hdgst": ${hdgst:-false}, 00:33:34.850 "ddgst": ${ddgst:-false} 00:33:34.850 }, 00:33:34.850 "method": "bdev_nvme_attach_controller" 00:33:34.850 } 00:33:34.850 EOF 00:33:34.850 )") 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:34.850 { 00:33:34.850 "params": { 00:33:34.850 "name": "Nvme$subsystem", 00:33:34.850 "trtype": "$TEST_TRANSPORT", 00:33:34.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:34.850 "adrfam": "ipv4", 00:33:34.850 "trsvcid": "$NVMF_PORT", 00:33:34.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:34.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:34.850 "hdgst": ${hdgst:-false}, 00:33:34.850 "ddgst": ${ddgst:-false} 00:33:34.850 }, 00:33:34.850 "method": "bdev_nvme_attach_controller" 00:33:34.850 } 00:33:34.850 EOF 00:33:34.850 )") 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:34.850 "params": { 00:33:34.850 "name": "Nvme0", 00:33:34.850 "trtype": "tcp", 00:33:34.850 "traddr": "10.0.0.2", 00:33:34.850 "adrfam": "ipv4", 00:33:34.850 "trsvcid": "4420", 00:33:34.850 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:34.850 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:34.850 "hdgst": false, 00:33:34.850 "ddgst": false 00:33:34.850 }, 00:33:34.850 "method": "bdev_nvme_attach_controller" 00:33:34.850 },{ 00:33:34.850 "params": { 00:33:34.850 "name": "Nvme1", 00:33:34.850 "trtype": "tcp", 00:33:34.850 "traddr": "10.0.0.2", 00:33:34.850 "adrfam": "ipv4", 00:33:34.850 "trsvcid": "4420", 00:33:34.850 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:34.850 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:34.850 "hdgst": false, 00:33:34.850 "ddgst": false 00:33:34.850 }, 00:33:34.850 "method": "bdev_nvme_attach_controller" 00:33:34.850 },{ 00:33:34.850 "params": { 00:33:34.850 "name": "Nvme2", 00:33:34.850 "trtype": "tcp", 00:33:34.850 "traddr": "10.0.0.2", 00:33:34.850 "adrfam": "ipv4", 00:33:34.850 "trsvcid": "4420", 00:33:34.850 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:34.850 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:33:34.850 "hdgst": false, 00:33:34.850 "ddgst": false 00:33:34.850 }, 00:33:34.850 "method": "bdev_nvme_attach_controller" 00:33:34.850 }' 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:34.850 19:10:56 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:34.850 19:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:34.850 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:34.850 ... 00:33:34.850 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:34.850 ... 00:33:34.850 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:34.850 ... 
00:33:34.850 fio-3.35 00:33:34.850 Starting 24 threads 00:33:47.060 00:33:47.060 filename0: (groupid=0, jobs=1): err= 0: pid=3904810: Wed Nov 20 19:11:07 2024 00:33:47.060 read: IOPS=608, BW=2436KiB/s (2494kB/s)(23.8MiB/10010msec) 00:33:47.061 slat (usec): min=10, max=107, avg=47.58, stdev=19.04 00:33:47.061 clat (usec): min=12866, max=30693, avg=25904.94, stdev=2055.30 00:33:47.061 lat (usec): min=12921, max=30728, avg=25952.53, stdev=2057.38 00:33:47.061 clat percentiles (usec): 00:33:47.061 | 1.00th=[22676], 5.00th=[23462], 10.00th=[23987], 20.00th=[24511], 00:33:47.061 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[26084], 00:33:47.061 | 70.00th=[26608], 80.00th=[27395], 90.00th=[29230], 95.00th=[30016], 00:33:47.061 | 99.00th=[30278], 99.50th=[30540], 99.90th=[30540], 99.95th=[30540], 00:33:47.061 | 99.99th=[30802] 00:33:47.061 bw ( KiB/s): min= 2176, max= 2688, per=4.18%, avg=2438.47, stdev=144.54, samples=19 00:33:47.061 iops : min= 544, max= 672, avg=609.58, stdev=36.14, samples=19 00:33:47.061 lat (msec) : 20=0.79%, 50=99.21% 00:33:47.061 cpu : usr=98.06%, sys=1.29%, ctx=98, majf=0, minf=9 00:33:47.061 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:47.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.061 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.061 issued rwts: total=6096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:47.061 filename0: (groupid=0, jobs=1): err= 0: pid=3904811: Wed Nov 20 19:11:07 2024 00:33:47.061 read: IOPS=607, BW=2431KiB/s (2490kB/s)(23.8MiB/10003msec) 00:33:47.061 slat (nsec): min=5202, max=74639, avg=30234.61, stdev=14491.75 00:33:47.061 clat (usec): min=9109, max=38881, avg=26072.46, stdev=2207.01 00:33:47.061 lat (usec): min=9118, max=38898, avg=26102.70, stdev=2207.95 00:33:47.061 clat percentiles (usec): 00:33:47.061 | 1.00th=[22938], 
5.00th=[23462], 10.00th=[24249], 20.00th=[24773], 00:33:47.061 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25822], 60.00th=[26346], 00:33:47.061 | 70.00th=[26608], 80.00th=[27657], 90.00th=[29492], 95.00th=[30016], 00:33:47.061 | 99.00th=[30540], 99.50th=[30540], 99.90th=[39060], 99.95th=[39060], 00:33:47.061 | 99.99th=[39060] 00:33:47.061 bw ( KiB/s): min= 2176, max= 2688, per=4.16%, avg=2425.21, stdev=144.58, samples=19 00:33:47.061 iops : min= 544, max= 672, avg=606.26, stdev=36.18, samples=19 00:33:47.061 lat (msec) : 10=0.03%, 20=0.49%, 50=99.47% 00:33:47.061 cpu : usr=98.27%, sys=1.15%, ctx=55, majf=0, minf=9 00:33:47.061 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:33:47.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.061 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.061 issued rwts: total=6080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:47.061 filename0: (groupid=0, jobs=1): err= 0: pid=3904812: Wed Nov 20 19:11:07 2024 00:33:47.061 read: IOPS=607, BW=2432KiB/s (2490kB/s)(23.8MiB/10002msec) 00:33:47.061 slat (usec): min=6, max=116, avg=33.82, stdev=17.54 00:33:47.061 clat (usec): min=8318, max=48363, avg=26036.92, stdev=2455.49 00:33:47.061 lat (usec): min=8325, max=48412, avg=26070.75, stdev=2455.22 00:33:47.061 clat percentiles (usec): 00:33:47.061 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23987], 20.00th=[24511], 00:33:47.061 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[26346], 00:33:47.061 | 70.00th=[26608], 80.00th=[27395], 90.00th=[29492], 95.00th=[30016], 00:33:47.061 | 99.00th=[30278], 99.50th=[30540], 99.90th=[47973], 99.95th=[48497], 00:33:47.061 | 99.99th=[48497] 00:33:47.061 bw ( KiB/s): min= 2171, max= 2560, per=4.16%, avg=2425.16, stdev=131.88, samples=19 00:33:47.061 iops : min= 542, max= 640, avg=606.21, stdev=33.05, samples=19 00:33:47.061 lat 
(msec) : 10=0.26%, 20=0.26%, 50=99.47% 00:33:47.061 cpu : usr=98.83%, sys=0.77%, ctx=27, majf=0, minf=9 00:33:47.061 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:47.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.061 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.061 issued rwts: total=6080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:47.061 filename0: (groupid=0, jobs=1): err= 0: pid=3904813: Wed Nov 20 19:11:07 2024 00:33:47.061 read: IOPS=608, BW=2434KiB/s (2492kB/s)(23.8MiB/10002msec) 00:33:47.061 slat (usec): min=6, max=110, avg=49.09, stdev=18.20 00:33:47.061 clat (usec): min=9844, max=48993, avg=25856.50, stdev=2624.25 00:33:47.061 lat (usec): min=9887, max=49011, avg=25905.60, stdev=2628.04 00:33:47.061 clat percentiles (usec): 00:33:47.061 | 1.00th=[19530], 5.00th=[23200], 10.00th=[23987], 20.00th=[24511], 00:33:47.061 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25560], 60.00th=[26084], 00:33:47.061 | 70.00th=[26346], 80.00th=[27395], 90.00th=[29230], 95.00th=[29754], 00:33:47.061 | 99.00th=[30278], 99.50th=[30540], 99.90th=[49021], 99.95th=[49021], 00:33:47.061 | 99.99th=[49021] 00:33:47.061 bw ( KiB/s): min= 2171, max= 2560, per=4.16%, avg=2427.47, stdev=132.36, samples=19 00:33:47.061 iops : min= 542, max= 640, avg=606.79, stdev=33.16, samples=19 00:33:47.061 lat (msec) : 10=0.07%, 20=1.02%, 50=98.92% 00:33:47.061 cpu : usr=97.74%, sys=1.45%, ctx=161, majf=0, minf=9 00:33:47.061 IO depths : 1=6.0%, 2=12.2%, 4=24.8%, 8=50.5%, 16=6.5%, 32=0.0%, >=64=0.0% 00:33:47.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.061 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.061 issued rwts: total=6086,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:47.061 
filename0: (groupid=0, jobs=1): err= 0: pid=3904814: Wed Nov 20 19:11:07 2024 00:33:47.061 read: IOPS=607, BW=2430KiB/s (2488kB/s)(23.7MiB/10002msec) 00:33:47.061 slat (usec): min=7, max=139, avg=46.01, stdev=22.74 00:33:47.061 clat (usec): min=9845, max=48772, avg=25926.96, stdev=2561.47 00:33:47.061 lat (usec): min=9860, max=48799, avg=25972.98, stdev=2565.34 00:33:47.061 clat percentiles (usec): 00:33:47.061 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23987], 20.00th=[24511], 00:33:47.061 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[26084], 00:33:47.061 | 70.00th=[26346], 80.00th=[27395], 90.00th=[29230], 95.00th=[29754], 00:33:47.061 | 99.00th=[30278], 99.50th=[35914], 99.90th=[48497], 99.95th=[48497], 00:33:47.061 | 99.99th=[49021] 00:33:47.061 bw ( KiB/s): min= 2171, max= 2560, per=4.15%, avg=2423.26, stdev=130.56, samples=19 00:33:47.061 iops : min= 542, max= 640, avg=605.74, stdev=32.72, samples=19 00:33:47.061 lat (msec) : 10=0.08%, 20=0.64%, 50=99.28% 00:33:47.061 cpu : usr=98.39%, sys=1.01%, ctx=124, majf=0, minf=9 00:33:47.061 IO depths : 1=5.1%, 2=10.4%, 4=21.1%, 8=55.1%, 16=8.3%, 32=0.0%, >=64=0.0% 00:33:47.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.061 complete : 0=0.0%, 4=93.3%, 8=1.8%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.061 issued rwts: total=6076,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:47.061 filename0: (groupid=0, jobs=1): err= 0: pid=3904815: Wed Nov 20 19:11:07 2024 00:33:47.061 read: IOPS=609, BW=2439KiB/s (2497kB/s)(23.8MiB/10010msec) 00:33:47.061 slat (usec): min=7, max=296, avg=55.09, stdev=19.13 00:33:47.061 clat (usec): min=7187, max=30673, avg=25746.63, stdev=2158.30 00:33:47.061 lat (usec): min=7195, max=30724, avg=25801.72, stdev=2163.43 00:33:47.061 clat percentiles (usec): 00:33:47.061 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23987], 20.00th=[24249], 00:33:47.061 | 30.00th=[24511], 
40.00th=[24773], 50.00th=[25560], 60.00th=[26084], 00:33:47.061 | 70.00th=[26346], 80.00th=[27395], 90.00th=[29230], 95.00th=[29754], 00:33:47.061 | 99.00th=[30278], 99.50th=[30278], 99.90th=[30540], 99.95th=[30540], 00:33:47.061 | 99.99th=[30802] 00:33:47.061 bw ( KiB/s): min= 2176, max= 2688, per=4.18%, avg=2441.42, stdev=147.69, samples=19 00:33:47.061 iops : min= 544, max= 672, avg=610.32, stdev=36.93, samples=19 00:33:47.061 lat (msec) : 10=0.10%, 20=0.79%, 50=99.12% 00:33:47.061 cpu : usr=98.76%, sys=0.80%, ctx=35, majf=0, minf=9 00:33:47.061 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:47.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.061 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.061 issued rwts: total=6103,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:47.061 filename0: (groupid=0, jobs=1): err= 0: pid=3904816: Wed Nov 20 19:11:07 2024 00:33:47.061 read: IOPS=607, BW=2431KiB/s (2490kB/s)(23.8MiB/10003msec) 00:33:47.061 slat (nsec): min=5813, max=77206, avg=30769.87, stdev=17420.11 00:33:47.061 clat (usec): min=9054, max=38734, avg=26022.89, stdev=2177.98 00:33:47.061 lat (usec): min=9063, max=38754, avg=26053.66, stdev=2180.20 00:33:47.061 clat percentiles (usec): 00:33:47.061 | 1.00th=[22938], 5.00th=[23462], 10.00th=[24249], 20.00th=[24511], 00:33:47.061 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25822], 60.00th=[26084], 00:33:47.061 | 70.00th=[26608], 80.00th=[27657], 90.00th=[29492], 95.00th=[30016], 00:33:47.061 | 99.00th=[30278], 99.50th=[30540], 99.90th=[38536], 99.95th=[38536], 00:33:47.061 | 99.99th=[38536] 00:33:47.061 bw ( KiB/s): min= 2176, max= 2688, per=4.16%, avg=2425.21, stdev=143.06, samples=19 00:33:47.061 iops : min= 544, max= 672, avg=606.26, stdev=35.80, samples=19 00:33:47.061 lat (msec) : 10=0.03%, 20=0.56%, 50=99.41% 00:33:47.061 cpu : usr=98.74%, 
sys=0.87%, ctx=34, majf=0, minf=9 00:33:47.061 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:33:47.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.061 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.061 issued rwts: total=6080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:47.061 filename0: (groupid=0, jobs=1): err= 0: pid=3904817: Wed Nov 20 19:11:07 2024 00:33:47.061 read: IOPS=608, BW=2436KiB/s (2494kB/s)(23.8MiB/10010msec) 00:33:47.061 slat (usec): min=6, max=134, avg=57.45, stdev=17.67 00:33:47.061 clat (usec): min=10805, max=30695, avg=25772.87, stdev=2060.88 00:33:47.061 lat (usec): min=10820, max=30753, avg=25830.32, stdev=2065.40 00:33:47.061 clat percentiles (usec): 00:33:47.061 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23987], 20.00th=[24249], 00:33:47.061 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25297], 60.00th=[26084], 00:33:47.061 | 70.00th=[26346], 80.00th=[27395], 90.00th=[29230], 95.00th=[29754], 00:33:47.061 | 99.00th=[30278], 99.50th=[30278], 99.90th=[30540], 99.95th=[30540], 00:33:47.061 | 99.99th=[30802] 00:33:47.061 bw ( KiB/s): min= 2176, max= 2688, per=4.18%, avg=2438.47, stdev=144.54, samples=19 00:33:47.061 iops : min= 544, max= 672, avg=609.58, stdev=36.14, samples=19 00:33:47.061 lat (msec) : 20=0.75%, 50=99.25% 00:33:47.061 cpu : usr=99.02%, sys=0.59%, ctx=23, majf=0, minf=9 00:33:47.061 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:47.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.061 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.061 issued rwts: total=6096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:47.061 filename1: (groupid=0, jobs=1): err= 0: pid=3904819: Wed Nov 20 19:11:07 2024 
00:33:47.061 read: IOPS=606, BW=2425KiB/s (2484kB/s)(23.7MiB/10001msec) 00:33:47.061 slat (nsec): min=3586, max=76161, avg=32139.70, stdev=17148.95 00:33:47.061 clat (usec): min=12618, max=56143, avg=26100.94, stdev=2131.77 00:33:47.061 lat (usec): min=12627, max=56155, avg=26133.08, stdev=2129.60 00:33:47.061 clat percentiles (usec): 00:33:47.061 | 1.00th=[22938], 5.00th=[23725], 10.00th=[24249], 20.00th=[24511], 00:33:47.061 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25822], 60.00th=[26346], 00:33:47.061 | 70.00th=[26608], 80.00th=[27395], 90.00th=[29492], 95.00th=[30016], 00:33:47.061 | 99.00th=[30278], 99.50th=[30540], 99.90th=[42206], 99.95th=[42206], 00:33:47.061 | 99.99th=[56361] 00:33:47.061 bw ( KiB/s): min= 2176, max= 2688, per=4.16%, avg=2424.68, stdev=137.75, samples=19 00:33:47.061 iops : min= 544, max= 672, avg=606.11, stdev=34.41, samples=19 00:33:47.061 lat (msec) : 20=0.03%, 50=99.93%, 100=0.03% 00:33:47.061 cpu : usr=98.28%, sys=1.16%, ctx=60, majf=0, minf=9 00:33:47.061 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:47.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.061 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.061 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:47.061 filename1: (groupid=0, jobs=1): err= 0: pid=3904820: Wed Nov 20 19:11:07 2024 00:33:47.061 read: IOPS=608, BW=2436KiB/s (2494kB/s)(23.8MiB/10010msec) 00:33:47.061 slat (usec): min=12, max=109, avg=49.82, stdev=17.66 00:33:47.061 clat (usec): min=13003, max=30665, avg=25882.54, stdev=2061.03 00:33:47.061 lat (usec): min=13047, max=30689, avg=25932.36, stdev=2062.63 00:33:47.061 clat percentiles (usec): 00:33:47.061 | 1.00th=[22676], 5.00th=[23462], 10.00th=[23987], 20.00th=[24511], 00:33:47.061 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[26084], 00:33:47.061 | 
70.00th=[26608], 80.00th=[27395], 90.00th=[29230], 95.00th=[30016], 00:33:47.061 | 99.00th=[30278], 99.50th=[30278], 99.90th=[30540], 99.95th=[30540], 00:33:47.061 | 99.99th=[30540] 00:33:47.061 bw ( KiB/s): min= 2176, max= 2688, per=4.18%, avg=2438.47, stdev=144.54, samples=19 00:33:47.061 iops : min= 544, max= 672, avg=609.58, stdev=36.14, samples=19 00:33:47.061 lat (msec) : 20=0.79%, 50=99.21% 00:33:47.061 cpu : usr=98.63%, sys=1.00%, ctx=12, majf=0, minf=9 00:33:47.061 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:47.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.061 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.061 issued rwts: total=6096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:47.061 filename1: (groupid=0, jobs=1): err= 0: pid=3904821: Wed Nov 20 19:11:07 2024 00:33:47.061 read: IOPS=607, BW=2430KiB/s (2489kB/s)(23.8MiB/10007msec) 00:33:47.061 slat (usec): min=6, max=138, avg=55.36, stdev=18.24 00:33:47.061 clat (usec): min=13386, max=33210, avg=25845.63, stdev=1985.60 00:33:47.061 lat (usec): min=13400, max=33229, avg=25900.99, stdev=1988.67 00:33:47.061 clat percentiles (usec): 00:33:47.061 | 1.00th=[22676], 5.00th=[23462], 10.00th=[23987], 20.00th=[24249], 00:33:47.061 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25560], 60.00th=[26084], 00:33:47.061 | 70.00th=[26346], 80.00th=[27395], 90.00th=[29230], 95.00th=[29754], 00:33:47.061 | 99.00th=[30278], 99.50th=[30278], 99.90th=[33162], 99.95th=[33162], 00:33:47.061 | 99.99th=[33162] 00:33:47.061 bw ( KiB/s): min= 2176, max= 2688, per=4.17%, avg=2432.00, stdev=134.92, samples=19 00:33:47.061 iops : min= 544, max= 672, avg=608.00, stdev=33.73, samples=19 00:33:47.061 lat (msec) : 20=0.26%, 50=99.74% 00:33:47.061 cpu : usr=98.73%, sys=0.82%, ctx=32, majf=0, minf=9 00:33:47.061 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 
16=6.2%, 32=0.0%, >=64=0.0% 00:33:47.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.061 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.061 issued rwts: total=6080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:47.061 filename1: (groupid=0, jobs=1): err= 0: pid=3904822: Wed Nov 20 19:11:07 2024 00:33:47.061 read: IOPS=607, BW=2428KiB/s (2487kB/s)(23.7MiB/10012msec) 00:33:47.061 slat (nsec): min=6985, max=74465, avg=29994.09, stdev=14570.16 00:33:47.061 clat (usec): min=12532, max=52847, avg=26109.76, stdev=2325.83 00:33:47.061 lat (usec): min=12591, max=52870, avg=26139.75, stdev=2326.35 00:33:47.061 clat percentiles (usec): 00:33:47.061 | 1.00th=[22938], 5.00th=[23462], 10.00th=[24249], 20.00th=[24773], 00:33:47.061 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25822], 60.00th=[26346], 00:33:47.061 | 70.00th=[26608], 80.00th=[27657], 90.00th=[29754], 95.00th=[30278], 00:33:47.061 | 99.00th=[30540], 99.50th=[30540], 99.90th=[45876], 99.95th=[45876], 00:33:47.061 | 99.99th=[52691] 00:33:47.061 bw ( KiB/s): min= 2176, max= 2688, per=4.16%, avg=2425.47, stdev=137.68, samples=19 00:33:47.061 iops : min= 544, max= 672, avg=606.37, stdev=34.42, samples=19 00:33:47.061 lat (msec) : 20=0.49%, 50=99.47%, 100=0.03% 00:33:47.061 cpu : usr=98.35%, sys=1.13%, ctx=59, majf=0, minf=9 00:33:47.061 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:33:47.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.061 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.061 issued rwts: total=6078,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:47.061 filename1: (groupid=0, jobs=1): err= 0: pid=3904823: Wed Nov 20 19:11:07 2024 00:33:47.061 read: IOPS=606, BW=2425KiB/s (2484kB/s)(23.7MiB/10001msec) 
00:33:47.061 slat (nsec): min=7331, max=77193, avg=27367.76, stdev=17365.67 00:33:47.061 clat (usec): min=19387, max=41584, avg=26137.61, stdev=2020.29 00:33:47.061 lat (usec): min=19398, max=41612, avg=26164.98, stdev=2023.50 00:33:47.061 clat percentiles (usec): 00:33:47.061 | 1.00th=[22938], 5.00th=[23462], 10.00th=[24249], 20.00th=[24773], 00:33:47.061 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25822], 60.00th=[26346], 00:33:47.061 | 70.00th=[26608], 80.00th=[27657], 90.00th=[29492], 95.00th=[30016], 00:33:47.061 | 99.00th=[30278], 99.50th=[30540], 99.90th=[41681], 99.95th=[41681], 00:33:47.061 | 99.99th=[41681] 00:33:47.061 bw ( KiB/s): min= 2176, max= 2688, per=4.16%, avg=2425.26, stdev=144.52, samples=19 00:33:47.061 iops : min= 544, max= 672, avg=606.32, stdev=36.13, samples=19 00:33:47.061 lat (msec) : 20=0.10%, 50=99.90% 00:33:47.061 cpu : usr=98.68%, sys=0.95%, ctx=17, majf=0, minf=9 00:33:47.061 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:47.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.061 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.061 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:47.061 filename1: (groupid=0, jobs=1): err= 0: pid=3904824: Wed Nov 20 19:11:07 2024 00:33:47.061 read: IOPS=608, BW=2436KiB/s (2494kB/s)(23.8MiB/10010msec) 00:33:47.061 slat (usec): min=7, max=106, avg=29.02, stdev=18.44 00:33:47.061 clat (usec): min=12869, max=30724, avg=26065.38, stdev=2071.52 00:33:47.061 lat (usec): min=12923, max=30751, avg=26094.40, stdev=2070.78 00:33:47.061 clat percentiles (usec): 00:33:47.061 | 1.00th=[22676], 5.00th=[23462], 10.00th=[24249], 20.00th=[24773], 00:33:47.061 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25560], 60.00th=[26346], 00:33:47.061 | 70.00th=[26608], 80.00th=[27657], 90.00th=[29492], 95.00th=[30278], 00:33:47.061 | 
99.00th=[30540], 99.50th=[30540], 99.90th=[30540], 99.95th=[30802], 00:33:47.061 | 99.99th=[30802] 00:33:47.061 bw ( KiB/s): min= 2176, max= 2688, per=4.18%, avg=2438.47, stdev=144.54, samples=19 00:33:47.061 iops : min= 544, max= 672, avg=609.58, stdev=36.14, samples=19 00:33:47.061 lat (msec) : 20=0.75%, 50=99.25% 00:33:47.061 cpu : usr=97.92%, sys=1.28%, ctx=131, majf=0, minf=10 00:33:47.061 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:47.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.061 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.061 issued rwts: total=6096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:47.061 filename1: (groupid=0, jobs=1): err= 0: pid=3904825: Wed Nov 20 19:11:07 2024 00:33:47.061 read: IOPS=606, BW=2426KiB/s (2484kB/s)(23.7MiB/10009msec) 00:33:47.061 slat (usec): min=3, max=108, avg=41.15, stdev=21.54 00:33:47.061 clat (usec): min=10752, max=48651, avg=26036.53, stdev=2606.85 00:33:47.061 lat (usec): min=10759, max=48698, avg=26077.67, stdev=2608.58 00:33:47.061 clat percentiles (usec): 00:33:47.061 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23987], 20.00th=[24511], 00:33:47.061 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[26084], 00:33:47.061 | 70.00th=[26346], 80.00th=[27395], 90.00th=[29492], 95.00th=[30016], 00:33:47.062 | 99.00th=[30540], 99.50th=[40109], 99.90th=[48497], 99.95th=[48497], 00:33:47.062 | 99.99th=[48497] 00:33:47.062 bw ( KiB/s): min= 2176, max= 2688, per=4.16%, avg=2427.53, stdev=142.63, samples=19 00:33:47.062 iops : min= 544, max= 672, avg=606.84, stdev=35.65, samples=19 00:33:47.062 lat (msec) : 20=0.59%, 50=99.41% 00:33:47.062 cpu : usr=98.17%, sys=1.24%, ctx=63, majf=0, minf=9 00:33:47.062 IO depths : 1=4.9%, 2=9.8%, 4=20.0%, 8=56.4%, 16=8.8%, 32=0.0%, >=64=0.0% 00:33:47.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.062 complete : 0=0.0%, 4=93.1%, 8=2.2%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.062 issued rwts: total=6070,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:47.062 filename1: (groupid=0, jobs=1): err= 0: pid=3904826: Wed Nov 20 19:11:07 2024 00:33:47.062 read: IOPS=608, BW=2436KiB/s (2494kB/s)(23.8MiB/10010msec) 00:33:47.062 slat (usec): min=12, max=139, avg=51.92, stdev=20.48 00:33:47.062 clat (usec): min=10756, max=30645, avg=25845.13, stdev=2032.46 00:33:47.062 lat (usec): min=10776, max=30711, avg=25897.04, stdev=2036.63 00:33:47.062 clat percentiles (usec): 00:33:47.062 | 1.00th=[22676], 5.00th=[23462], 10.00th=[23987], 20.00th=[24511], 00:33:47.062 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[26084], 00:33:47.062 | 70.00th=[26346], 80.00th=[27395], 90.00th=[29230], 95.00th=[29754], 00:33:47.062 | 99.00th=[30278], 99.50th=[30278], 99.90th=[30540], 99.95th=[30540], 00:33:47.062 | 99.99th=[30540] 00:33:47.062 bw ( KiB/s): min= 2176, max= 2688, per=4.18%, avg=2438.47, stdev=144.54, samples=19 00:33:47.062 iops : min= 544, max= 672, avg=609.58, stdev=36.14, samples=19 00:33:47.062 lat (msec) : 20=0.75%, 50=99.25% 00:33:47.062 cpu : usr=98.47%, sys=1.03%, ctx=39, majf=0, minf=9 00:33:47.062 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:47.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.062 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.062 issued rwts: total=6096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:47.062 filename2: (groupid=0, jobs=1): err= 0: pid=3904827: Wed Nov 20 19:11:07 2024 00:33:47.062 read: IOPS=607, BW=2431KiB/s (2490kB/s)(23.8MiB/10003msec) 00:33:47.062 slat (usec): min=6, max=104, avg=53.42, stdev=15.40 00:33:47.062 clat (usec): min=8503, 
max=43773, avg=25863.00, stdev=2312.79 00:33:47.062 lat (usec): min=8513, max=43789, avg=25916.43, stdev=2314.38 00:33:47.062 clat percentiles (usec): 00:33:47.062 | 1.00th=[22676], 5.00th=[23462], 10.00th=[23987], 20.00th=[24511], 00:33:47.062 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25560], 60.00th=[26084], 00:33:47.062 | 70.00th=[26346], 80.00th=[27395], 90.00th=[29230], 95.00th=[30016], 00:33:47.062 | 99.00th=[30278], 99.50th=[30540], 99.90th=[43779], 99.95th=[43779], 00:33:47.062 | 99.99th=[43779] 00:33:47.062 bw ( KiB/s): min= 2171, max= 2688, per=4.16%, avg=2424.95, stdev=138.59, samples=19 00:33:47.062 iops : min= 542, max= 672, avg=606.16, stdev=34.72, samples=19 00:33:47.062 lat (msec) : 10=0.03%, 20=0.49%, 50=99.47% 00:33:47.062 cpu : usr=98.41%, sys=1.20%, ctx=40, majf=0, minf=9 00:33:47.062 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:47.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.062 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.062 issued rwts: total=6080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:47.062 filename2: (groupid=0, jobs=1): err= 0: pid=3904829: Wed Nov 20 19:11:07 2024 00:33:47.062 read: IOPS=607, BW=2431KiB/s (2489kB/s)(23.8MiB/10006msec) 00:33:47.062 slat (nsec): min=6559, max=74997, avg=30406.19, stdev=17824.90 00:33:47.062 clat (usec): min=13902, max=33766, avg=26051.30, stdev=2004.78 00:33:47.062 lat (usec): min=13915, max=33788, avg=26081.70, stdev=2005.02 00:33:47.062 clat percentiles (usec): 00:33:47.062 | 1.00th=[22938], 5.00th=[23725], 10.00th=[24249], 20.00th=[24511], 00:33:47.062 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25822], 60.00th=[26346], 00:33:47.062 | 70.00th=[26608], 80.00th=[27657], 90.00th=[29492], 95.00th=[30016], 00:33:47.062 | 99.00th=[30278], 99.50th=[30540], 99.90th=[33817], 99.95th=[33817], 00:33:47.062 | 99.99th=[33817] 
00:33:47.062 bw ( KiB/s): min= 2176, max= 2688, per=4.17%, avg=2431.74, stdev=135.19, samples=19 00:33:47.062 iops : min= 544, max= 672, avg=607.89, stdev=33.84, samples=19 00:33:47.062 lat (msec) : 20=0.26%, 50=99.74% 00:33:47.062 cpu : usr=98.85%, sys=0.80%, ctx=15, majf=0, minf=9 00:33:47.062 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:47.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.062 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.062 issued rwts: total=6080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:47.062 filename2: (groupid=0, jobs=1): err= 0: pid=3904830: Wed Nov 20 19:11:07 2024 00:33:47.062 read: IOPS=607, BW=2430KiB/s (2488kB/s)(23.8MiB/10010msec) 00:33:47.062 slat (usec): min=7, max=127, avg=38.04, stdev=26.59 00:33:47.062 clat (usec): min=16132, max=32914, avg=26034.24, stdev=1918.75 00:33:47.062 lat (usec): min=16179, max=32956, avg=26072.28, stdev=1925.41 00:33:47.062 clat percentiles (usec): 00:33:47.062 | 1.00th=[23200], 5.00th=[23725], 10.00th=[24249], 20.00th=[24511], 00:33:47.062 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[26084], 00:33:47.062 | 70.00th=[26608], 80.00th=[27657], 90.00th=[29230], 95.00th=[30016], 00:33:47.062 | 99.00th=[30540], 99.50th=[30540], 99.90th=[32637], 99.95th=[32900], 00:33:47.062 | 99.99th=[32900] 00:33:47.062 bw ( KiB/s): min= 2176, max= 2688, per=4.17%, avg=2431.74, stdev=141.51, samples=19 00:33:47.062 iops : min= 544, max= 672, avg=607.89, stdev=35.38, samples=19 00:33:47.062 lat (msec) : 20=0.26%, 50=99.74% 00:33:47.062 cpu : usr=98.60%, sys=0.91%, ctx=45, majf=0, minf=9 00:33:47.062 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:47.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.062 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:33:47.062 issued rwts: total=6080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:47.062 filename2: (groupid=0, jobs=1): err= 0: pid=3904831: Wed Nov 20 19:11:07 2024 00:33:47.062 read: IOPS=608, BW=2436KiB/s (2494kB/s)(23.8MiB/10010msec) 00:33:47.062 slat (usec): min=10, max=106, avg=48.81, stdev=18.40 00:33:47.062 clat (usec): min=10779, max=30698, avg=25895.02, stdev=2057.94 00:33:47.062 lat (usec): min=10804, max=30735, avg=25943.83, stdev=2059.83 00:33:47.062 clat percentiles (usec): 00:33:47.062 | 1.00th=[22676], 5.00th=[23462], 10.00th=[23987], 20.00th=[24511], 00:33:47.062 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[26084], 00:33:47.062 | 70.00th=[26608], 80.00th=[27395], 90.00th=[29230], 95.00th=[30016], 00:33:47.062 | 99.00th=[30278], 99.50th=[30540], 99.90th=[30540], 99.95th=[30540], 00:33:47.062 | 99.99th=[30802] 00:33:47.062 bw ( KiB/s): min= 2176, max= 2688, per=4.18%, avg=2438.47, stdev=144.54, samples=19 00:33:47.062 iops : min= 544, max= 672, avg=609.58, stdev=36.14, samples=19 00:33:47.062 lat (msec) : 20=0.75%, 50=99.25% 00:33:47.062 cpu : usr=98.36%, sys=1.11%, ctx=100, majf=0, minf=9 00:33:47.062 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:47.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.062 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.062 issued rwts: total=6096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:47.062 filename2: (groupid=0, jobs=1): err= 0: pid=3904832: Wed Nov 20 19:11:07 2024 00:33:47.062 read: IOPS=606, BW=2425KiB/s (2483kB/s)(23.7MiB/10003msec) 00:33:47.062 slat (usec): min=5, max=108, avg=53.89, stdev=15.39 00:33:47.062 clat (usec): min=22248, max=43198, avg=25937.20, stdev=2075.44 00:33:47.062 lat (usec): min=22295, max=43215, avg=25991.09, 
stdev=2075.96 00:33:47.062 clat percentiles (usec): 00:33:47.062 | 1.00th=[22676], 5.00th=[23462], 10.00th=[23987], 20.00th=[24511], 00:33:47.062 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[26084], 00:33:47.062 | 70.00th=[26346], 80.00th=[27395], 90.00th=[29230], 95.00th=[30016], 00:33:47.062 | 99.00th=[30278], 99.50th=[30540], 99.90th=[43254], 99.95th=[43254], 00:33:47.062 | 99.99th=[43254] 00:33:47.062 bw ( KiB/s): min= 2176, max= 2688, per=4.16%, avg=2425.26, stdev=138.08, samples=19 00:33:47.062 iops : min= 544, max= 672, avg=606.32, stdev=34.52, samples=19 00:33:47.062 lat (msec) : 50=100.00% 00:33:47.062 cpu : usr=98.74%, sys=0.85%, ctx=43, majf=0, minf=9 00:33:47.062 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:47.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.062 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.062 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:47.062 filename2: (groupid=0, jobs=1): err= 0: pid=3904833: Wed Nov 20 19:11:07 2024 00:33:47.062 read: IOPS=608, BW=2436KiB/s (2494kB/s)(23.8MiB/10010msec) 00:33:47.062 slat (usec): min=8, max=106, avg=36.45, stdev=19.51 00:33:47.062 clat (usec): min=10720, max=30734, avg=26012.83, stdev=2069.17 00:33:47.062 lat (usec): min=10742, max=30750, avg=26049.28, stdev=2069.49 00:33:47.062 clat percentiles (usec): 00:33:47.062 | 1.00th=[22676], 5.00th=[23462], 10.00th=[24249], 20.00th=[24773], 00:33:47.062 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[26346], 00:33:47.062 | 70.00th=[26608], 80.00th=[27395], 90.00th=[29492], 95.00th=[30278], 00:33:47.062 | 99.00th=[30278], 99.50th=[30540], 99.90th=[30540], 99.95th=[30802], 00:33:47.062 | 99.99th=[30802] 00:33:47.062 bw ( KiB/s): min= 2176, max= 2688, per=4.18%, avg=2438.47, stdev=144.54, samples=19 00:33:47.062 iops : min= 544, 
max= 672, avg=609.58, stdev=36.14, samples=19 00:33:47.062 lat (msec) : 20=0.75%, 50=99.25% 00:33:47.062 cpu : usr=97.53%, sys=1.50%, ctx=198, majf=0, minf=9 00:33:47.062 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:47.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.062 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.062 issued rwts: total=6096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:47.062 filename2: (groupid=0, jobs=1): err= 0: pid=3904834: Wed Nov 20 19:11:07 2024 00:33:47.062 read: IOPS=608, BW=2436KiB/s (2494kB/s)(23.8MiB/10005msec) 00:33:47.062 slat (usec): min=5, max=108, avg=42.91, stdev=23.54 00:33:47.062 clat (usec): min=9931, max=48678, avg=25918.90, stdev=2297.62 00:33:47.062 lat (usec): min=9975, max=48695, avg=25961.81, stdev=2302.00 00:33:47.062 clat percentiles (usec): 00:33:47.062 | 1.00th=[21103], 5.00th=[23462], 10.00th=[24249], 20.00th=[24511], 00:33:47.062 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[26084], 00:33:47.062 | 70.00th=[26346], 80.00th=[27657], 90.00th=[29230], 95.00th=[29754], 00:33:47.062 | 99.00th=[30278], 99.50th=[30540], 99.90th=[39060], 99.95th=[39060], 00:33:47.062 | 99.99th=[48497] 00:33:47.062 bw ( KiB/s): min= 2176, max= 2688, per=4.17%, avg=2430.47, stdev=138.62, samples=19 00:33:47.062 iops : min= 544, max= 672, avg=607.58, stdev=34.66, samples=19 00:33:47.062 lat (msec) : 10=0.03%, 20=0.95%, 50=99.02% 00:33:47.062 cpu : usr=98.40%, sys=0.97%, ctx=88, majf=0, minf=9 00:33:47.062 IO depths : 1=4.0%, 2=8.2%, 4=16.8%, 8=60.5%, 16=10.5%, 32=0.0%, >=64=0.0% 00:33:47.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.062 complete : 0=0.0%, 4=92.5%, 8=3.7%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.062 issued rwts: total=6092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.062 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:33:47.062 filename2: (groupid=0, jobs=1): err= 0: pid=3904835: Wed Nov 20 19:11:07 2024 00:33:47.062 read: IOPS=607, BW=2432KiB/s (2490kB/s)(23.8MiB/10002msec) 00:33:47.062 slat (usec): min=6, max=152, avg=53.85, stdev=20.27 00:33:47.062 clat (usec): min=9846, max=43731, avg=25809.74, stdev=2303.11 00:33:47.062 lat (usec): min=9865, max=43749, avg=25863.59, stdev=2305.90 00:33:47.062 clat percentiles (usec): 00:33:47.062 | 1.00th=[22414], 5.00th=[23462], 10.00th=[23987], 20.00th=[24249], 00:33:47.062 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25560], 60.00th=[26084], 00:33:47.062 | 70.00th=[26346], 80.00th=[27395], 90.00th=[29230], 95.00th=[29754], 00:33:47.062 | 99.00th=[30278], 99.50th=[30278], 99.90th=[43779], 99.95th=[43779], 00:33:47.062 | 99.99th=[43779] 00:33:47.062 bw ( KiB/s): min= 2171, max= 2688, per=4.16%, avg=2424.95, stdev=138.59, samples=19 00:33:47.062 iops : min= 542, max= 672, avg=606.16, stdev=34.72, samples=19 00:33:47.062 lat (msec) : 10=0.10%, 20=0.43%, 50=99.47% 00:33:47.062 cpu : usr=98.71%, sys=0.90%, ctx=33, majf=0, minf=9 00:33:47.062 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:47.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.062 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.062 issued rwts: total=6080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:47.062 00:33:47.062 Run status group 0 (all jobs): 00:33:47.062 READ: bw=57.0MiB/s (59.7MB/s), 2425KiB/s-2439KiB/s (2483kB/s-2497kB/s), io=570MiB (598MB), run=10001-10012msec 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:47.062 19:11:08 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:47.062 
19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:47.062 bdev_null0 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:33:47.062 [2024-11-20 19:11:08.172154] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:47.062 bdev_null1 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:47.062 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.063 19:11:08 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:47.063 { 00:33:47.063 "params": { 00:33:47.063 "name": "Nvme$subsystem", 00:33:47.063 "trtype": "$TEST_TRANSPORT", 00:33:47.063 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:47.063 "adrfam": "ipv4", 00:33:47.063 "trsvcid": "$NVMF_PORT", 00:33:47.063 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:33:47.063 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:47.063 "hdgst": ${hdgst:-false}, 00:33:47.063 "ddgst": ${ddgst:-false} 00:33:47.063 }, 00:33:47.063 "method": "bdev_nvme_attach_controller" 00:33:47.063 } 00:33:47.063 EOF 00:33:47.063 )") 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 
-- # for subsystem in "${@:-1}" 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:47.063 { 00:33:47.063 "params": { 00:33:47.063 "name": "Nvme$subsystem", 00:33:47.063 "trtype": "$TEST_TRANSPORT", 00:33:47.063 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:47.063 "adrfam": "ipv4", 00:33:47.063 "trsvcid": "$NVMF_PORT", 00:33:47.063 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:47.063 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:47.063 "hdgst": ${hdgst:-false}, 00:33:47.063 "ddgst": ${ddgst:-false} 00:33:47.063 }, 00:33:47.063 "method": "bdev_nvme_attach_controller" 00:33:47.063 } 00:33:47.063 EOF 00:33:47.063 )") 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:47.063 "params": { 00:33:47.063 "name": "Nvme0", 00:33:47.063 "trtype": "tcp", 00:33:47.063 "traddr": "10.0.0.2", 00:33:47.063 "adrfam": "ipv4", 00:33:47.063 "trsvcid": "4420", 00:33:47.063 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:47.063 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:47.063 "hdgst": false, 00:33:47.063 "ddgst": false 00:33:47.063 }, 00:33:47.063 "method": "bdev_nvme_attach_controller" 00:33:47.063 },{ 00:33:47.063 "params": { 00:33:47.063 "name": "Nvme1", 00:33:47.063 "trtype": "tcp", 00:33:47.063 "traddr": "10.0.0.2", 00:33:47.063 "adrfam": "ipv4", 00:33:47.063 "trsvcid": "4420", 00:33:47.063 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:47.063 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:47.063 "hdgst": false, 00:33:47.063 "ddgst": false 00:33:47.063 }, 00:33:47.063 "method": "bdev_nvme_attach_controller" 00:33:47.063 }' 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:47.063 19:11:08 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:47.063 19:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:47.063 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:47.063 ... 00:33:47.063 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:47.063 ... 00:33:47.063 fio-3.35 00:33:47.063 Starting 4 threads 00:33:52.336 00:33:52.336 filename0: (groupid=0, jobs=1): err= 0: pid=3906776: Wed Nov 20 19:11:14 2024 00:33:52.336 read: IOPS=2805, BW=21.9MiB/s (23.0MB/s)(110MiB/5002msec) 00:33:52.336 slat (usec): min=6, max=199, avg= 8.64, stdev= 3.30 00:33:52.336 clat (usec): min=662, max=43921, avg=2825.99, stdev=1062.49 00:33:52.336 lat (usec): min=673, max=43950, avg=2834.63, stdev=1062.57 00:33:52.336 clat percentiles (usec): 00:33:52.336 | 1.00th=[ 1811], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2474], 00:33:52.336 | 30.00th=[ 2606], 40.00th=[ 2704], 50.00th=[ 2868], 60.00th=[ 2966], 00:33:52.336 | 70.00th=[ 2966], 80.00th=[ 3032], 90.00th=[ 3228], 95.00th=[ 3425], 00:33:52.336 | 99.00th=[ 4015], 99.50th=[ 4293], 99.90th=[ 5014], 99.95th=[43779], 00:33:52.336 | 99.99th=[43779] 00:33:52.336 bw ( KiB/s): min=20256, max=23952, per=26.48%, avg=22572.44, stdev=1202.71, samples=9 00:33:52.336 iops : min= 2532, max= 2994, avg=2821.56, stdev=150.34, samples=9 00:33:52.336 lat (usec) : 750=0.01%, 1000=0.01% 00:33:52.336 lat (msec) : 2=2.43%, 4=96.51%, 10=0.99%, 50=0.06% 00:33:52.336 cpu : usr=95.56%, sys=4.12%, ctx=8, majf=0, minf=9 00:33:52.336 IO depths : 1=0.3%, 2=5.7%, 4=64.9%, 8=29.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:52.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.336 
complete : 0=0.0%, 4=93.5%, 8=6.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.336 issued rwts: total=14034,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.336 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:52.336 filename0: (groupid=0, jobs=1): err= 0: pid=3906777: Wed Nov 20 19:11:14 2024 00:33:52.336 read: IOPS=2647, BW=20.7MiB/s (21.7MB/s)(103MiB/5001msec) 00:33:52.336 slat (nsec): min=5994, max=36163, avg=8841.04, stdev=3003.32 00:33:52.336 clat (usec): min=897, max=5943, avg=2995.88, stdev=475.45 00:33:52.336 lat (usec): min=903, max=5950, avg=3004.72, stdev=475.25 00:33:52.336 clat percentiles (usec): 00:33:52.336 | 1.00th=[ 1991], 5.00th=[ 2311], 10.00th=[ 2474], 20.00th=[ 2671], 00:33:52.336 | 30.00th=[ 2835], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 2999], 00:33:52.336 | 70.00th=[ 3064], 80.00th=[ 3261], 90.00th=[ 3523], 95.00th=[ 3851], 00:33:52.336 | 99.00th=[ 4752], 99.50th=[ 5014], 99.90th=[ 5211], 99.95th=[ 5473], 00:33:52.336 | 99.99th=[ 5932] 00:33:52.336 bw ( KiB/s): min=19792, max=22144, per=24.82%, avg=21163.89, stdev=865.21, samples=9 00:33:52.336 iops : min= 2474, max= 2768, avg=2645.44, stdev=108.13, samples=9 00:33:52.336 lat (usec) : 1000=0.06% 00:33:52.336 lat (msec) : 2=1.00%, 4=95.21%, 10=3.72% 00:33:52.336 cpu : usr=95.80%, sys=3.90%, ctx=9, majf=0, minf=9 00:33:52.336 IO depths : 1=0.2%, 2=4.2%, 4=66.8%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:52.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.336 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.336 issued rwts: total=13241,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.336 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:52.336 filename1: (groupid=0, jobs=1): err= 0: pid=3906778: Wed Nov 20 19:11:14 2024 00:33:52.336 read: IOPS=2652, BW=20.7MiB/s (21.7MB/s)(104MiB/5002msec) 00:33:52.336 slat (nsec): min=6029, max=27726, avg=8676.52, stdev=2970.34 00:33:52.336 clat (usec): min=753, 
max=5515, avg=2991.14, stdev=470.07 00:33:52.336 lat (usec): min=764, max=5521, avg=2999.82, stdev=469.84 00:33:52.336 clat percentiles (usec): 00:33:52.336 | 1.00th=[ 1942], 5.00th=[ 2278], 10.00th=[ 2474], 20.00th=[ 2671], 00:33:52.336 | 30.00th=[ 2835], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 2999], 00:33:52.336 | 70.00th=[ 3064], 80.00th=[ 3261], 90.00th=[ 3523], 95.00th=[ 3818], 00:33:52.336 | 99.00th=[ 4621], 99.50th=[ 4883], 99.90th=[ 5211], 99.95th=[ 5276], 00:33:52.336 | 99.99th=[ 5538] 00:33:52.336 bw ( KiB/s): min=20112, max=21968, per=24.96%, avg=21280.00, stdev=664.77, samples=9 00:33:52.336 iops : min= 2514, max= 2746, avg=2660.00, stdev=83.10, samples=9 00:33:52.336 lat (usec) : 1000=0.08% 00:33:52.336 lat (msec) : 2=1.27%, 4=95.14%, 10=3.50% 00:33:52.336 cpu : usr=95.92%, sys=3.76%, ctx=7, majf=0, minf=9 00:33:52.336 IO depths : 1=0.1%, 2=2.2%, 4=68.5%, 8=29.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:52.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.336 complete : 0=0.0%, 4=93.7%, 8=6.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.336 issued rwts: total=13269,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.336 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:52.336 filename1: (groupid=0, jobs=1): err= 0: pid=3906779: Wed Nov 20 19:11:14 2024 00:33:52.336 read: IOPS=2551, BW=19.9MiB/s (20.9MB/s)(99.7MiB/5001msec) 00:33:52.336 slat (nsec): min=6039, max=65072, avg=8686.08, stdev=3116.80 00:33:52.336 clat (usec): min=628, max=5640, avg=3109.40, stdev=457.97 00:33:52.336 lat (usec): min=636, max=5653, avg=3118.09, stdev=457.63 00:33:52.336 clat percentiles (usec): 00:33:52.336 | 1.00th=[ 2114], 5.00th=[ 2442], 10.00th=[ 2671], 20.00th=[ 2900], 00:33:52.336 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3064], 00:33:52.336 | 70.00th=[ 3228], 80.00th=[ 3359], 90.00th=[ 3654], 95.00th=[ 3949], 00:33:52.336 | 99.00th=[ 4752], 99.50th=[ 4948], 99.90th=[ 5276], 99.95th=[ 5342], 00:33:52.336 | 
99.99th=[ 5538] 00:33:52.337 bw ( KiB/s): min=19568, max=21488, per=23.87%, avg=20350.22, stdev=699.19, samples=9 00:33:52.337 iops : min= 2446, max= 2686, avg=2543.78, stdev=87.40, samples=9 00:33:52.337 lat (usec) : 750=0.01%, 1000=0.01% 00:33:52.337 lat (msec) : 2=0.67%, 4=94.82%, 10=4.49% 00:33:52.337 cpu : usr=96.22%, sys=3.48%, ctx=5, majf=0, minf=9 00:33:52.337 IO depths : 1=0.2%, 2=2.4%, 4=70.2%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:52.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.337 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.337 issued rwts: total=12762,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.337 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:52.337 00:33:52.337 Run status group 0 (all jobs): 00:33:52.337 READ: bw=83.3MiB/s (87.3MB/s), 19.9MiB/s-21.9MiB/s (20.9MB/s-23.0MB/s), io=416MiB (437MB), run=5001-5002msec 00:33:52.337 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:33:52.337 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:52.337 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:52.337 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:52.337 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:52.337 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:52.337 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.337 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.337 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.337 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:52.337 19:11:14 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.337 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.337 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.337 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:52.337 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:52.337 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:52.337 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:52.337 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.337 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.337 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.337 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:52.337 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.337 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.337 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.337 00:33:52.337 real 0m24.477s 00:33:52.337 user 4m51.903s 00:33:52.337 sys 0m5.165s 00:33:52.337 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:52.337 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.337 ************************************ 00:33:52.337 END TEST fio_dif_rand_params 00:33:52.337 ************************************ 00:33:52.337 19:11:14 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:33:52.337 19:11:14 nvmf_dif -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:52.337 19:11:14 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:52.337 19:11:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:52.337 ************************************ 00:33:52.337 START TEST fio_dif_digest 00:33:52.337 ************************************ 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 
00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:52.337 bdev_null0 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:52.337 [2024-11-20 19:11:14.555415] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@560 -- # config=() 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:52.337 { 00:33:52.337 "params": { 00:33:52.337 "name": "Nvme$subsystem", 00:33:52.337 "trtype": "$TEST_TRANSPORT", 00:33:52.337 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:52.337 "adrfam": "ipv4", 00:33:52.337 "trsvcid": "$NVMF_PORT", 00:33:52.337 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:52.337 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:52.337 "hdgst": ${hdgst:-false}, 00:33:52.337 "ddgst": ${ddgst:-false} 00:33:52.337 }, 00:33:52.337 "method": "bdev_nvme_attach_controller" 00:33:52.337 } 00:33:52.337 EOF 00:33:52.337 )") 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:52.337 "params": { 00:33:52.337 "name": "Nvme0", 00:33:52.337 "trtype": "tcp", 00:33:52.337 "traddr": "10.0.0.2", 00:33:52.337 "adrfam": "ipv4", 00:33:52.337 "trsvcid": "4420", 00:33:52.337 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:52.337 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:52.337 "hdgst": true, 00:33:52.337 "ddgst": true 00:33:52.337 }, 00:33:52.337 "method": "bdev_nvme_attach_controller" 00:33:52.337 }' 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:52.337 19:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:52.338 19:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:52.338 19:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:52.338 19:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:52.338 19:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:52.338 19:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:52.338 19:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:52.338 19:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:52.904 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:52.904 ... 
00:33:52.904 fio-3.35 00:33:52.904 Starting 3 threads 00:34:05.108 00:34:05.108 filename0: (groupid=0, jobs=1): err= 0: pid=3908006: Wed Nov 20 19:11:25 2024 00:34:05.108 read: IOPS=289, BW=36.2MiB/s (38.0MB/s)(364MiB/10046msec) 00:34:05.108 slat (nsec): min=6294, max=33688, avg=11491.41, stdev=1829.17 00:34:05.108 clat (usec): min=6479, max=52525, avg=10328.39, stdev=1317.19 00:34:05.108 lat (usec): min=6491, max=52538, avg=10339.88, stdev=1317.18 00:34:05.108 clat percentiles (usec): 00:34:05.108 | 1.00th=[ 8455], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9634], 00:34:05.108 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10290], 60.00th=[10421], 00:34:05.108 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11338], 95.00th=[11600], 00:34:05.108 | 99.00th=[12125], 99.50th=[12518], 99.90th=[13566], 99.95th=[47973], 00:34:05.108 | 99.99th=[52691] 00:34:05.108 bw ( KiB/s): min=35328, max=39168, per=35.37%, avg=37222.40, stdev=911.36, samples=20 00:34:05.108 iops : min= 276, max= 306, avg=290.80, stdev= 7.12, samples=20 00:34:05.108 lat (msec) : 10=34.33%, 20=65.60%, 50=0.03%, 100=0.03% 00:34:05.108 cpu : usr=94.40%, sys=5.31%, ctx=28, majf=0, minf=108 00:34:05.108 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:05.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:05.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:05.108 issued rwts: total=2910,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:05.108 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:05.108 filename0: (groupid=0, jobs=1): err= 0: pid=3908007: Wed Nov 20 19:11:25 2024 00:34:05.108 read: IOPS=260, BW=32.5MiB/s (34.1MB/s)(327MiB/10043msec) 00:34:05.108 slat (nsec): min=6299, max=29106, avg=11449.74, stdev=1612.29 00:34:05.108 clat (usec): min=6783, max=47417, avg=11492.34, stdev=1293.68 00:34:05.108 lat (usec): min=6794, max=47429, avg=11503.79, stdev=1293.67 00:34:05.108 clat percentiles (usec): 
00:34:05.108 | 1.00th=[ 9503], 5.00th=[10159], 10.00th=[10552], 20.00th=[10814], 00:34:05.108 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11469], 60.00th=[11600], 00:34:05.108 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[12911], 00:34:05.108 | 99.00th=[13566], 99.50th=[13698], 99.90th=[15270], 99.95th=[45351], 00:34:05.108 | 99.99th=[47449] 00:34:05.108 bw ( KiB/s): min=32512, max=34816, per=31.78%, avg=33446.40, stdev=665.88, samples=20 00:34:05.108 iops : min= 254, max= 272, avg=261.30, stdev= 5.20, samples=20 00:34:05.108 lat (msec) : 10=3.10%, 20=96.83%, 50=0.08% 00:34:05.108 cpu : usr=94.99%, sys=4.71%, ctx=19, majf=0, minf=67 00:34:05.108 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:05.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:05.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:05.108 issued rwts: total=2615,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:05.108 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:05.108 filename0: (groupid=0, jobs=1): err= 0: pid=3908008: Wed Nov 20 19:11:25 2024 00:34:05.108 read: IOPS=272, BW=34.0MiB/s (35.7MB/s)(342MiB/10043msec) 00:34:05.108 slat (nsec): min=6314, max=29552, avg=11455.70, stdev=1806.45 00:34:05.108 clat (usec): min=8567, max=52557, avg=10987.52, stdev=1847.02 00:34:05.108 lat (usec): min=8577, max=52570, avg=10998.98, stdev=1846.94 00:34:05.108 clat percentiles (usec): 00:34:05.108 | 1.00th=[ 9110], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10290], 00:34:05.108 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:34:05.108 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11863], 95.00th=[12125], 00:34:05.108 | 99.00th=[12780], 99.50th=[13173], 99.90th=[50594], 99.95th=[50594], 00:34:05.108 | 99.99th=[52691] 00:34:05.108 bw ( KiB/s): min=32000, max=35840, per=33.24%, avg=34982.40, stdev=852.20, samples=20 00:34:05.108 iops : min= 250, max= 280, avg=273.30, 
stdev= 6.66, samples=20 00:34:05.108 lat (msec) : 10=10.82%, 20=88.99%, 50=0.04%, 100=0.15% 00:34:05.108 cpu : usr=94.68%, sys=5.03%, ctx=12, majf=0, minf=82 00:34:05.108 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:05.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:05.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:05.108 issued rwts: total=2735,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:05.108 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:05.108 00:34:05.108 Run status group 0 (all jobs): 00:34:05.108 READ: bw=103MiB/s (108MB/s), 32.5MiB/s-36.2MiB/s (34.1MB/s-38.0MB/s), io=1033MiB (1083MB), run=10043-10046msec 00:34:05.108 19:11:25 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:05.108 19:11:25 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:05.108 19:11:25 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:05.108 19:11:25 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:05.108 19:11:25 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:05.108 19:11:25 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:05.108 19:11:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.108 19:11:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:05.108 19:11:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.108 19:11:25 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:05.108 19:11:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.108 19:11:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:05.108 19:11:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.108 
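The fio job above attaches its TCP controller through a JSON config streamed to `--spdk_json_conf` (the `printf '%s\n' '{ "params": ... }' | jq .` pipeline at `nvmf/common.sh@586`). The sketch below rebuilds that fragment with the values copied from the trace and sanity-checks it; note the real harness wraps such method objects in a larger subsystems document, which is not shown here, and `python3` is assumed to be available for validation:

```shell
# Rebuild the bdev_nvme_attach_controller fragment seen in the trace and
# verify it is well-formed JSON before handing it to fio.
gen_conf() {
    cat <<'EOF'
{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": true,
    "ddgst": true
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

# Usage in the trace is roughly:
#   fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 job.fio
gen_conf | python3 -m json.tool > /dev/null && echo "config OK"
```

The `"hdgst": true` and `"ddgst": true` parameters are what make this a digest test: both header and data digests are enabled on the NVMe/TCP connection that the three fio threads then read through.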
00:34:05.108 real 0m11.312s 00:34:05.108 user 0m35.853s 00:34:05.108 sys 0m1.871s 00:34:05.108 19:11:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:05.108 19:11:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:05.108 ************************************ 00:34:05.108 END TEST fio_dif_digest 00:34:05.108 ************************************ 00:34:05.108 19:11:25 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:05.108 19:11:25 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:05.108 19:11:25 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:05.108 19:11:25 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:34:05.108 19:11:25 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:05.108 19:11:25 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:34:05.108 19:11:25 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:05.108 19:11:25 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:05.108 rmmod nvme_tcp 00:34:05.109 rmmod nvme_fabrics 00:34:05.109 rmmod nvme_keyring 00:34:05.109 19:11:25 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:05.109 19:11:25 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:34:05.109 19:11:25 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:34:05.109 19:11:25 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3899450 ']' 00:34:05.109 19:11:25 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3899450 00:34:05.109 19:11:25 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 3899450 ']' 00:34:05.109 19:11:25 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 3899450 00:34:05.109 19:11:25 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:34:05.109 19:11:25 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:05.109 19:11:25 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3899450 00:34:05.109 19:11:26 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 
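Teardown above unloads the kernel modules inside a bounded retry loop (`nvmf/common.sh@125`: `set +e`, `for i in {1..20}`, `modprobe -v -r nvme-tcp`, then `set -e` again), since module removal can fail transiently while references are still held. A generic sketch of that retry pattern; the `flaky` demo command is hypothetical and stands in for `modprobe -r`:

```shell
# Bounded-retry pattern mirroring the module-unload loop in the trace:
# keep retrying a command that may fail while a resource is still busy,
# then restore errexit semantics.
retry() {
    local attempts=$1; shift
    local i
    set +e
    for ((i = 1; i <= attempts; i++)); do
        if "$@"; then
            set -e
            return 0
        fi
        sleep 0.1
    done
    set -e
    return 1
}

# Hypothetical demo: a command that fails twice, then succeeds.
n=0
flaky() { n=$((n + 1)); [ "$n" -ge 3 ]; }
retry 20 flaky && echo "succeeded after $n attempts"
```

The `rmmod nvme_tcp` / `rmmod nvme_fabrics` / `rmmod nvme_keyring` lines interleaved in the trace are the kernel's own confirmation that each removal eventually went through.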
00:34:05.109 19:11:26 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:05.109 19:11:26 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3899450' 00:34:05.109 killing process with pid 3899450 00:34:05.109 19:11:26 nvmf_dif -- common/autotest_common.sh@973 -- # kill 3899450 00:34:05.109 19:11:26 nvmf_dif -- common/autotest_common.sh@978 -- # wait 3899450 00:34:05.109 19:11:26 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:05.109 19:11:26 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:07.022 Waiting for block devices as requested 00:34:07.022 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:07.022 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:07.022 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:07.022 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:07.022 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:07.022 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:07.286 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:07.286 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:07.286 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:07.544 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:07.544 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:07.544 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:07.803 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:07.803 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:07.803 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:07.803 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:08.062 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:08.062 19:11:30 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:08.062 19:11:30 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:08.062 19:11:30 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:34:08.062 19:11:30 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:34:08.062 19:11:30 nvmf_dif -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:08.062 19:11:30 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:34:08.062 19:11:30 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:08.062 19:11:30 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:08.062 19:11:30 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:08.062 19:11:30 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:08.062 19:11:30 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:10.595 19:11:32 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:10.595 00:34:10.595 real 1m14.421s 00:34:10.595 user 7m10.003s 00:34:10.595 sys 0m20.925s 00:34:10.595 19:11:32 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:10.595 19:11:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:10.595 ************************************ 00:34:10.595 END TEST nvmf_dif 00:34:10.595 ************************************ 00:34:10.595 19:11:32 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:10.595 19:11:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:10.595 19:11:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:10.595 19:11:32 -- common/autotest_common.sh@10 -- # set +x 00:34:10.595 ************************************ 00:34:10.595 START TEST nvmf_abort_qd_sizes 00:34:10.595 ************************************ 00:34:10.595 19:11:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:10.596 * Looking for test storage... 
00:34:10.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:10.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:10.596 --rc genhtml_branch_coverage=1 00:34:10.596 --rc genhtml_function_coverage=1 00:34:10.596 --rc genhtml_legend=1 00:34:10.596 --rc geninfo_all_blocks=1 00:34:10.596 --rc geninfo_unexecuted_blocks=1 00:34:10.596 00:34:10.596 ' 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:10.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:10.596 --rc genhtml_branch_coverage=1 00:34:10.596 --rc genhtml_function_coverage=1 00:34:10.596 --rc genhtml_legend=1 00:34:10.596 --rc 
geninfo_all_blocks=1 00:34:10.596 --rc geninfo_unexecuted_blocks=1 00:34:10.596 00:34:10.596 ' 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:10.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:10.596 --rc genhtml_branch_coverage=1 00:34:10.596 --rc genhtml_function_coverage=1 00:34:10.596 --rc genhtml_legend=1 00:34:10.596 --rc geninfo_all_blocks=1 00:34:10.596 --rc geninfo_unexecuted_blocks=1 00:34:10.596 00:34:10.596 ' 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:10.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:10.596 --rc genhtml_branch_coverage=1 00:34:10.596 --rc genhtml_function_coverage=1 00:34:10.596 --rc genhtml_legend=1 00:34:10.596 --rc geninfo_all_blocks=1 00:34:10.596 --rc geninfo_unexecuted_blocks=1 00:34:10.596 00:34:10.596 ' 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:10.596 19:11:32 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:10.596 19:11:32 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:34:10.597 19:11:32 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:10.597 19:11:32 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:10.597 19:11:32 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:10.597 19:11:32 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.597 19:11:32 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.597 19:11:32 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.597 19:11:32 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:10.597 19:11:32 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.597 19:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:34:10.597 19:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:10.597 19:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:10.597 19:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:10.597 19:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:10.597 19:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:10.597 19:11:32 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:10.597 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:10.597 19:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:10.597 19:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:10.597 19:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:10.597 19:11:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:10.597 19:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:10.597 19:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:10.597 19:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:10.597 19:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:10.597 19:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:10.597 19:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:10.597 19:11:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:10.597 19:11:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:10.597 19:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:10.597 19:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:10.597 19:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:34:10.597 19:11:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:17.168 19:11:38 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:17.168 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:17.168 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:17.168 Found net devices under 0000:86:00.0: cvl_0_0 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:17.168 Found net devices under 0000:86:00.1: cvl_0_1 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:17.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:17.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.399 ms 00:34:17.168 00:34:17.168 --- 10.0.0.2 ping statistics --- 00:34:17.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:17.168 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:17.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:17.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:34:17.168 00:34:17.168 --- 10.0.0.1 ping statistics --- 00:34:17.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:17.168 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:17.168 19:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:19.076 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:19.076 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:19.076 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:19.076 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:19.076 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:19.076 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:19.076 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:19.076 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:19.076 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:19.076 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:19.076 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:19.076 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:19.335 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:19.335 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:19.335 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:19.335 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:20.713 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:20.714 19:11:42 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:20.714 19:11:42 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:20.714 19:11:42 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:20.714 19:11:42 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:20.714 19:11:42 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:20.714 19:11:42 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:20.714 19:11:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:34:20.714 19:11:42 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:20.714 19:11:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:20.714 19:11:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:20.714 19:11:42 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3915863 00:34:20.714 19:11:42 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3915863 00:34:20.714 19:11:42 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:20.714 19:11:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 3915863 ']' 00:34:20.714 19:11:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:20.714 19:11:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:20.714 19:11:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:20.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:20.714 19:11:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:20.714 19:11:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:20.714 [2024-11-20 19:11:42.960852] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:34:20.714 [2024-11-20 19:11:42.960897] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:20.972 [2024-11-20 19:11:43.041278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:20.972 [2024-11-20 19:11:43.084032] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:20.972 [2024-11-20 19:11:43.084068] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:20.972 [2024-11-20 19:11:43.084077] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:20.972 [2024-11-20 19:11:43.084084] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:20.972 [2024-11-20 19:11:43.084091] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
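The "Total cores available: 4" notice above follows directly from the `-m 0xf` core mask passed to `nvmf_tgt`. A minimal sketch of how such a hex mask maps to a core list (the helper name `mask_to_cores` is illustrative, not part of the SPDK scripts):

```shell
# Sketch only: decode an SPDK-style core mask such as the -m 0xf seen above.
# Each set bit selects one core for a reactor thread.
mask_to_cores() {
  local mask=$((16#${1#0x}))   # strip optional 0x, parse as hex
  local core=0 cores=()
  while (( mask )); do
    (( mask & 1 )) && cores+=("$core")
    (( mask >>= 1, core++ ))
  done
  echo "${cores[@]}"
}
mask_to_cores 0xf   # prints: 0 1 2 3
```

Four set bits, four cores — matching the four "Reactor started" notices that follow in the log.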
00:34:20.972 [2024-11-20 19:11:43.085628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:20.972 [2024-11-20 19:11:43.085665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:20.972 [2024-11-20 19:11:43.085772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:20.972 [2024-11-20 19:11:43.085774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:20.972 19:11:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:20.972 19:11:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:34:20.972 19:11:43 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:20.972 19:11:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:20.972 19:11:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:20.972 19:11:43 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:20.972 19:11:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:20.972 19:11:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:34:20.972 19:11:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:34:20.972 19:11:43 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:34:20.972 19:11:43 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:34:20.973 19:11:43 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:34:20.973 19:11:43 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:20.973 19:11:43 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:34:20.973 19:11:43 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
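`nvme_in_userspace` above picks controllers out of `pci_bus_cache["0x010802"]`, i.e. by PCI class code: base class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVM Express). A hedged sketch of that decode (the helper name `decode_pci_class` is illustrative):

```shell
# Sketch only: split a 24-bit PCI class code such as the 0x010802 used above
# into base class / subclass / programming interface.
decode_pci_class() {
  local code=${1#0x}
  printf 'base=%s sub=%s progif=%s\n' "${code:0:2}" "${code:2:2}" "${code:4:2}"
}
decode_pci_class 0x010802   # prints: base=01 sub=08 progif=02
```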
00:34:20.973 19:11:43 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:34:20.973 19:11:43 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:34:20.973 19:11:43 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:34:20.973 19:11:43 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:34:20.973 19:11:43 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:34:20.973 19:11:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:34:20.973 19:11:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:34:20.973 19:11:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:34:20.973 19:11:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:20.973 19:11:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:20.973 19:11:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:20.973 ************************************ 00:34:20.973 START TEST spdk_target_abort 00:34:20.973 ************************************ 00:34:20.973 19:11:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:34:20.973 19:11:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:20.973 19:11:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:34:20.973 19:11:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.973 19:11:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:24.268 spdk_targetn1 00:34:24.268 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.268 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:24.268 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.268 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:24.268 [2024-11-20 19:11:46.113331] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:24.268 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.268 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:34:24.268 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.268 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:24.268 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.268 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:34:24.268 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.268 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:24.268 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.268 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:34:24.268 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.268 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:24.268 [2024-11-20 19:11:46.165657] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:24.268 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.268 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:34:24.268 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:24.268 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:24.269 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:34:24.269 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:24.269 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:24.269 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:24.269 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:24.269 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:24.269 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:24.269 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:24.269 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:24.269 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:24.269 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:24.269 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:34:24.269 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:24.269 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:24.269 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:24.269 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:24.269 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:24.269 19:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:27.550 Initializing NVMe Controllers 00:34:27.550 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:27.550 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:27.550 Initialization complete. Launching workers. 
00:34:27.550 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16627, failed: 0 00:34:27.550 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1312, failed to submit 15315 00:34:27.550 success 763, unsuccessful 549, failed 0 00:34:27.550 19:11:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:27.550 19:11:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:30.826 Initializing NVMe Controllers 00:34:30.826 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:30.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:30.826 Initialization complete. Launching workers. 00:34:30.826 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8649, failed: 0 00:34:30.826 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1241, failed to submit 7408 00:34:30.826 success 338, unsuccessful 903, failed 0 00:34:30.826 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:30.826 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:34.108 Initializing NVMe Controllers 00:34:34.108 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:34.108 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:34.108 Initialization complete. Launching workers. 
00:34:34.108 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38903, failed: 0 00:34:34.108 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2790, failed to submit 36113 00:34:34.108 success 613, unsuccessful 2177, failed 0 00:34:34.108 19:11:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:34:34.108 19:11:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.108 19:11:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:34.108 19:11:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.108 19:11:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:34:34.108 19:11:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.108 19:11:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:35.561 19:11:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.561 19:11:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3915863 00:34:35.561 19:11:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 3915863 ']' 00:34:35.561 19:11:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 3915863 00:34:35.561 19:11:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:34:35.561 19:11:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:35.561 19:11:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3915863 00:34:35.561 19:11:57 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:35.561 19:11:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:35.561 19:11:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3915863' 00:34:35.561 killing process with pid 3915863 00:34:35.561 19:11:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 3915863 00:34:35.561 19:11:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 3915863 00:34:35.561 00:34:35.561 real 0m14.486s 00:34:35.561 user 0m55.335s 00:34:35.561 sys 0m2.564s 00:34:35.561 19:11:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:35.561 19:11:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:35.561 ************************************ 00:34:35.561 END TEST spdk_target_abort 00:34:35.561 ************************************ 00:34:35.561 19:11:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:34:35.561 19:11:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:35.561 19:11:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:35.561 19:11:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:35.561 ************************************ 00:34:35.561 START TEST kernel_target_abort 00:34:35.561 ************************************ 00:34:35.561 19:11:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:34:35.561 19:11:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:34:35.561 19:11:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:34:35.561 19:11:57 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:35.561 19:11:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:35.561 19:11:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.561 19:11:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.561 19:11:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:35.561 19:11:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.561 19:11:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:35.561 19:11:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:35.561 19:11:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:35.561 19:11:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:35.561 19:11:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:35.561 19:11:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:35.561 19:11:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:35.561 19:11:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:35.561 19:11:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:35.561 19:11:57 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:34:35.561 19:11:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:34:35.561 19:11:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:35.842 19:11:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:35.842 19:11:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:38.378 Waiting for block devices as requested 00:34:38.378 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:38.637 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:38.637 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:38.637 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:38.637 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:38.896 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:38.896 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:38.896 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:39.154 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:39.154 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:39.154 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:39.413 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:39.413 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:39.413 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:39.413 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:39.672 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:39.673 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:39.673 19:12:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:39.673 19:12:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:39.673 19:12:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:39.673 19:12:01 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:39.673 19:12:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:39.673 19:12:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:39.673 19:12:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:39.673 19:12:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:39.673 19:12:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:39.932 No valid GPT data, bailing 00:34:39.932 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:39.932 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:34:39.932 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:34:39.932 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:39.932 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:39.932 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:39.932 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:39.932 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:39.932 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:39.932 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:34:39.932 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:39.932 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:34:39.932 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:39.932 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:34:39.932 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:34:39.932 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:34:39.932 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:39.932 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:39.932 00:34:39.932 Discovery Log Number of Records 2, Generation counter 2 00:34:39.932 =====Discovery Log Entry 0====== 00:34:39.932 trtype: tcp 00:34:39.932 adrfam: ipv4 00:34:39.932 subtype: current discovery subsystem 00:34:39.932 treq: not specified, sq flow control disable supported 00:34:39.932 portid: 1 00:34:39.932 trsvcid: 4420 00:34:39.932 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:39.932 traddr: 10.0.0.1 00:34:39.932 eflags: none 00:34:39.932 sectype: none 00:34:39.932 =====Discovery Log Entry 1====== 00:34:39.932 trtype: tcp 00:34:39.932 adrfam: ipv4 00:34:39.932 subtype: nvme subsystem 00:34:39.932 treq: not specified, sq flow control disable supported 00:34:39.932 portid: 1 00:34:39.932 trsvcid: 4420 00:34:39.932 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:39.932 traddr: 10.0.0.1 00:34:39.932 eflags: none 00:34:39.932 sectype: none 00:34:39.932 19:12:02 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:34:39.932 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:39.932 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:39.932 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:34:39.932 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:39.932 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:39.932 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:39.932 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:39.932 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:39.932 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:39.932 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:39.932 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:39.932 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:39.932 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:39.933 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:34:39.933 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:34:39.933 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:34:39.933 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:39.933 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:39.933 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:39.933 19:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:43.221 Initializing NVMe Controllers 00:34:43.221 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:43.221 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:43.221 Initialization complete. Launching workers. 
00:34:43.221 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 94335, failed: 0 00:34:43.221 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 94335, failed to submit 0 00:34:43.221 success 0, unsuccessful 94335, failed 0 00:34:43.221 19:12:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:43.221 19:12:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:46.508 Initializing NVMe Controllers 00:34:46.508 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:46.508 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:46.508 Initialization complete. Launching workers. 00:34:46.508 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 147404, failed: 0 00:34:46.508 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37182, failed to submit 110222 00:34:46.508 success 0, unsuccessful 37182, failed 0 00:34:46.508 19:12:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:46.508 19:12:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:49.814 Initializing NVMe Controllers 00:34:49.814 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:49.814 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:49.814 Initialization complete. Launching workers. 
00:34:49.814 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 139753, failed: 0 00:34:49.814 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35018, failed to submit 104735 00:34:49.814 success 0, unsuccessful 35018, failed 0 00:34:49.814 19:12:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:34:49.814 19:12:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:49.814 19:12:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:34:49.814 19:12:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:49.814 19:12:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:49.814 19:12:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:49.814 19:12:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:49.814 19:12:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:49.814 19:12:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:49.814 19:12:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:52.353 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:52.353 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:52.353 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:52.353 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:52.353 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:52.353 
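The `clean_kernel_target` sequence above (nvmf/common.sh lines 712-719) tears the kernel nvmet configfs tree down leaf-first: disable the namespace (`echo 0` into its `enable` attribute), unlink the port's subsystem symlink, then `rmdir` namespace, port, and subsystem in that order. A dry-run sketch of the ordering against a throwaway directory standing in for `/sys/kernel/config/nvmet` (the real tree needs root, the port entry is a configfs symlink rather than a plain file, and configfs auto-manages the `namespaces/` and `subsystems/` group directories that a plain filesystem makes us remove by hand):

```shell
cfg=$(mktemp -d)   # stand-in for /sys/kernel/config/nvmet
nqn=nqn.2016-06.io.spdk:testnqn
mkdir -p "$cfg/subsystems/$nqn/namespaces/1" "$cfg/ports/1/subsystems"
touch "$cfg/ports/1/subsystems/$nqn"   # a symlink in real configfs

rm -f "$cfg/ports/1/subsystems/$nqn"             # 1. unlink port -> subsystem
rmdir "$cfg/subsystems/$nqn/namespaces/1"        # 2. remove the namespace
rmdir "$cfg/ports/1/subsystems" "$cfg/ports/1"   # 3. remove the port
rmdir "$cfg/subsystems/$nqn/namespaces" \
      "$cfg/subsystems/$nqn"                     # 4. remove the subsystem
```

Reversing steps 1 and 2 against real configfs would fail: the kernel refuses to drop a namespace or subsystem that a port still references, which is why the log runs `rm -f` on the port link before any `rmdir`, and only then `modprobe -r nvmet_tcp nvmet`.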
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:52.353 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:52.353 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:52.353 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:52.353 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:52.353 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:52.353 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:52.353 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:52.353 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:52.353 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:52.353 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:53.733 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:53.733 00:34:53.733 real 0m18.086s 00:34:53.733 user 0m9.103s 00:34:53.733 sys 0m5.149s 00:34:53.733 19:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:53.733 19:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:53.733 ************************************ 00:34:53.733 END TEST kernel_target_abort 00:34:53.733 ************************************ 00:34:53.733 19:12:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:53.733 19:12:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:34:53.733 19:12:15 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:53.733 19:12:15 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:34:53.733 19:12:15 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:53.733 19:12:15 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:34:53.733 19:12:15 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:53.733 19:12:15 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:53.733 rmmod nvme_tcp 00:34:53.733 rmmod nvme_fabrics 00:34:53.733 rmmod nvme_keyring 00:34:53.733 19:12:16 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:34:53.733 19:12:16 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:34:53.733 19:12:16 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:34:53.733 19:12:16 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3915863 ']' 00:34:53.733 19:12:16 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3915863 00:34:53.733 19:12:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 3915863 ']' 00:34:53.733 19:12:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 3915863 00:34:53.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3915863) - No such process 00:34:53.733 19:12:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 3915863 is not found' 00:34:53.733 Process with pid 3915863 is not found 00:34:53.733 19:12:16 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:53.733 19:12:16 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:57.021 Waiting for block devices as requested 00:34:57.021 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:57.021 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:57.021 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:57.021 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:57.021 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:57.021 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:57.021 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:57.021 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:57.280 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:57.280 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:57.280 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:57.540 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:57.540 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:57.540 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:57.798 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:57.798 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:57.798 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:58.058 19:12:20 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:58.058 19:12:20 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:58.058 19:12:20 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:34:58.058 19:12:20 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:34:58.058 19:12:20 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:58.058 19:12:20 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:34:58.058 19:12:20 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:58.058 19:12:20 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:58.058 19:12:20 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:58.058 19:12:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:58.058 19:12:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:59.965 19:12:22 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:59.965 00:34:59.965 real 0m49.780s 00:34:59.965 user 1m8.759s 00:34:59.965 sys 0m16.535s 00:34:59.965 19:12:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:59.965 19:12:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:59.965 ************************************ 00:34:59.965 END TEST nvmf_abort_qd_sizes 00:34:59.965 ************************************ 00:34:59.965 19:12:22 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:59.965 19:12:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:59.965 19:12:22 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:34:59.965 19:12:22 -- common/autotest_common.sh@10 -- # set +x 00:34:59.965 ************************************ 00:34:59.965 START TEST keyring_file 00:34:59.965 ************************************ 00:34:59.965 19:12:22 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:00.225 * Looking for test storage... 00:35:00.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:00.225 19:12:22 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:00.225 19:12:22 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:35:00.225 19:12:22 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:00.225 19:12:22 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:00.225 19:12:22 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:00.225 19:12:22 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:00.225 19:12:22 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:00.225 19:12:22 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:35:00.225 19:12:22 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:35:00.225 19:12:22 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:35:00.225 19:12:22 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:35:00.225 19:12:22 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:35:00.225 19:12:22 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:35:00.225 19:12:22 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:35:00.225 19:12:22 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:00.225 19:12:22 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:35:00.225 19:12:22 keyring_file -- scripts/common.sh@345 -- # : 1 00:35:00.225 19:12:22 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:00.225 19:12:22 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:00.225 19:12:22 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:35:00.225 19:12:22 keyring_file -- scripts/common.sh@353 -- # local d=1 00:35:00.225 19:12:22 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:00.225 19:12:22 keyring_file -- scripts/common.sh@355 -- # echo 1 00:35:00.225 19:12:22 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:35:00.225 19:12:22 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:35:00.225 19:12:22 keyring_file -- scripts/common.sh@353 -- # local d=2 00:35:00.225 19:12:22 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:00.225 19:12:22 keyring_file -- scripts/common.sh@355 -- # echo 2 00:35:00.225 19:12:22 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:35:00.225 19:12:22 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:00.225 19:12:22 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:00.225 19:12:22 keyring_file -- scripts/common.sh@368 -- # return 0 00:35:00.225 19:12:22 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:00.225 19:12:22 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:00.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.225 --rc genhtml_branch_coverage=1 00:35:00.225 --rc genhtml_function_coverage=1 00:35:00.225 --rc genhtml_legend=1 00:35:00.225 --rc geninfo_all_blocks=1 00:35:00.225 --rc geninfo_unexecuted_blocks=1 00:35:00.225 00:35:00.225 ' 00:35:00.225 19:12:22 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:00.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.225 --rc genhtml_branch_coverage=1 00:35:00.225 --rc genhtml_function_coverage=1 00:35:00.225 --rc genhtml_legend=1 00:35:00.225 --rc geninfo_all_blocks=1 00:35:00.225 --rc 
geninfo_unexecuted_blocks=1 00:35:00.225 00:35:00.225 ' 00:35:00.225 19:12:22 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:00.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.225 --rc genhtml_branch_coverage=1 00:35:00.225 --rc genhtml_function_coverage=1 00:35:00.225 --rc genhtml_legend=1 00:35:00.225 --rc geninfo_all_blocks=1 00:35:00.225 --rc geninfo_unexecuted_blocks=1 00:35:00.225 00:35:00.225 ' 00:35:00.225 19:12:22 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:00.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.225 --rc genhtml_branch_coverage=1 00:35:00.225 --rc genhtml_function_coverage=1 00:35:00.225 --rc genhtml_legend=1 00:35:00.225 --rc geninfo_all_blocks=1 00:35:00.225 --rc geninfo_unexecuted_blocks=1 00:35:00.225 00:35:00.225 ' 00:35:00.225 19:12:22 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:00.225 19:12:22 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:00.225 19:12:22 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:00.225 19:12:22 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:00.225 19:12:22 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:00.225 19:12:22 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:00.225 19:12:22 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:00.225 19:12:22 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:00.225 19:12:22 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:00.225 19:12:22 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:00.225 19:12:22 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:00.225 19:12:22 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:00.225 19:12:22 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:00.225 19:12:22 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:00.225 19:12:22 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:00.226 19:12:22 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:00.226 19:12:22 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:00.226 19:12:22 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:00.226 19:12:22 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:00.226 19:12:22 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:00.226 19:12:22 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:35:00.226 19:12:22 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:00.226 19:12:22 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:00.226 19:12:22 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:00.226 19:12:22 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.226 19:12:22 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.226 19:12:22 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.226 19:12:22 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:00.226 19:12:22 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.226 19:12:22 keyring_file -- nvmf/common.sh@51 -- # : 0 00:35:00.226 19:12:22 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:00.226 19:12:22 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:00.226 19:12:22 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:00.226 19:12:22 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:00.226 19:12:22 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:00.226 19:12:22 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:00.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:00.226 19:12:22 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:00.226 19:12:22 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:00.226 19:12:22 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:00.226 19:12:22 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:00.226 19:12:22 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:00.226 19:12:22 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:00.226 19:12:22 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:00.226 19:12:22 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:00.226 19:12:22 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:00.226 19:12:22 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:00.226 19:12:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:00.226 19:12:22 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:00.226 19:12:22 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:00.226 19:12:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:00.226 19:12:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:00.226 19:12:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.j45hn0QF98 00:35:00.226 19:12:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:00.226 19:12:22 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:00.226 19:12:22 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:00.226 19:12:22 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:00.226 19:12:22 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:35:00.226 19:12:22 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:00.226 19:12:22 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:00.226 19:12:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.j45hn0QF98 00:35:00.226 19:12:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.j45hn0QF98 00:35:00.226 19:12:22 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.j45hn0QF98 00:35:00.226 19:12:22 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:00.226 19:12:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:00.226 19:12:22 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:00.226 19:12:22 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:00.226 19:12:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:00.226 19:12:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:00.226 19:12:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.32Lck1qwcl 00:35:00.226 19:12:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:00.226 19:12:22 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:00.226 19:12:22 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:00.226 19:12:22 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:00.226 19:12:22 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:00.226 19:12:22 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:00.226 19:12:22 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:00.485 19:12:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.32Lck1qwcl 00:35:00.485 19:12:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.32Lck1qwcl 00:35:00.485 19:12:22 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.32Lck1qwcl 
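The `format_interchange_psk` calls above (nvmf/common.sh lines 730-733) hand the key material to an inline `python -` and write the result to a `chmod 0600` temp file. The output shape is the NVMe/TCP (TP 8006) PSK interchange format: `NVMeTLSkey-1:<digest>:base64(key || CRC32(key)):`. A hedged sketch of that computation — treating the 32-character hex string as raw ASCII bytes and appending the CRC32 little-endian are assumptions inferred from the interchange format, not lifted verbatim from nvmf/common.sh:

```shell
key=00112233445566778899aabbccddeeff digest=0
psk=$(python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib

key = sys.argv[1].encode()                   # ASCII bytes of the configured key
crc = zlib.crc32(key).to_bytes(4, "little")  # CRC32 appended little-endian
b64 = base64.b64encode(key + crc).decode()
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02x}:{b64}:")
PY
)
echo "$psk"
```

With digest 0 the result starts `NVMeTLSkey-1:00:` and ends with `:`; a consumer can validate it by base64-decoding the middle field and checking that the trailing four bytes match the CRC32 of the rest.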
00:35:00.485 19:12:22 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:00.485 19:12:22 keyring_file -- keyring/file.sh@30 -- # tgtpid=3925156 00:35:00.485 19:12:22 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3925156 00:35:00.485 19:12:22 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3925156 ']' 00:35:00.485 19:12:22 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:00.485 19:12:22 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:00.485 19:12:22 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:00.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:00.485 19:12:22 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:00.485 19:12:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:00.485 [2024-11-20 19:12:22.612831] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:35:00.485 [2024-11-20 19:12:22.612882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3925156 ] 00:35:00.485 [2024-11-20 19:12:22.671507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:00.485 [2024-11-20 19:12:22.714649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:00.744 19:12:22 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:00.744 19:12:22 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:00.744 19:12:22 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:00.744 19:12:22 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.744 19:12:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:00.744 [2024-11-20 19:12:22.942336] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:00.744 null0 00:35:00.744 [2024-11-20 19:12:22.974392] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:00.744 [2024-11-20 19:12:22.974758] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:00.744 19:12:22 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.744 19:12:22 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:00.744 19:12:22 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:00.744 19:12:22 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:00.744 19:12:22 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:00.744 19:12:22 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:35:00.744 19:12:22 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:00.744 19:12:22 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:00.744 19:12:22 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:00.744 19:12:22 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.744 19:12:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:00.744 [2024-11-20 19:12:23.002458] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:00.744 request: 00:35:00.744 { 00:35:00.744 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:00.744 "secure_channel": false, 00:35:00.744 "listen_address": { 00:35:00.744 "trtype": "tcp", 00:35:00.744 "traddr": "127.0.0.1", 00:35:00.744 "trsvcid": "4420" 00:35:00.744 }, 00:35:00.744 "method": "nvmf_subsystem_add_listener", 00:35:00.744 "req_id": 1 00:35:00.744 } 00:35:00.744 Got JSON-RPC error response 00:35:00.744 response: 00:35:00.744 { 00:35:00.745 "code": -32602, 00:35:00.745 "message": "Invalid parameters" 00:35:00.745 } 00:35:00.745 19:12:23 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:00.745 19:12:23 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:00.745 19:12:23 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:00.745 19:12:23 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:00.745 19:12:23 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:00.745 19:12:23 keyring_file -- keyring/file.sh@47 -- # bperfpid=3925160 00:35:00.745 19:12:23 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3925160 /var/tmp/bperf.sock 00:35:00.745 19:12:23 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:00.745 19:12:23 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3925160 ']' 00:35:00.745 19:12:23 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:00.745 19:12:23 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:00.745 19:12:23 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:00.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:00.745 19:12:23 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:00.745 19:12:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:00.745 [2024-11-20 19:12:23.056810] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 00:35:00.745 [2024-11-20 19:12:23.056852] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3925160 ] 00:35:01.004 [2024-11-20 19:12:23.132832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:01.004 [2024-11-20 19:12:23.174173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:01.004 19:12:23 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:01.004 19:12:23 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:01.004 19:12:23 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.j45hn0QF98 00:35:01.004 19:12:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.j45hn0QF98 00:35:01.265 19:12:23 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.32Lck1qwcl 00:35:01.265 19:12:23 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.32Lck1qwcl 00:35:01.527 19:12:23 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:35:01.527 19:12:23 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:01.527 19:12:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:01.527 19:12:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:01.527 19:12:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:01.527 19:12:23 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.j45hn0QF98 == \/\t\m\p\/\t\m\p\.\j\4\5\h\n\0\Q\F\9\8 ]] 00:35:01.527 19:12:23 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:35:01.527 19:12:23 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:35:01.527 19:12:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:01.527 19:12:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:01.527 19:12:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:01.786 19:12:24 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.32Lck1qwcl == \/\t\m\p\/\t\m\p\.\3\2\L\c\k\1\q\w\c\l ]] 00:35:01.786 19:12:24 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:35:01.786 19:12:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:01.786 19:12:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:01.786 19:12:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:01.786 19:12:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:01.786 19:12:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
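The `get_key`/`get_refcnt` helpers traced above (keyring/common.sh lines 10-12) are just `keyring_get_keys` piped through a jq `select` on the key name. Rerunning that filter here against canned RPC output — the array shape and values mirror what the log's checks imply, but the exact `keyring_get_keys` JSON fields are an assumption:

```shell
# Canned stand-in for: rpc.py -s /var/tmp/bperf.sock keyring_get_keys
keys='[{"name":"key0","path":"/tmp/tmp.j45hn0QF98","refcnt":1},
       {"name":"key1","path":"/tmp/tmp.32Lck1qwcl","refcnt":1}]'

# Same filter as the log: pick one key object, then project a field.
refcnt=$(jq -r '.[] | select(.name == "key0") | .refcnt' <<< "$keys")
echo "$refcnt"
```

This prints `1`, matching the `(( 1 == 1 ))` refcnt assertions in the trace; after `bdev_nvme_attach_controller` takes a reference, the same pipeline returns 2 for key0, which is exactly what the `(( 2 == 2 ))` check later in the log verifies.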
00:35:02.045 19:12:24 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:02.045 19:12:24 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:35:02.045 19:12:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:02.045 19:12:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:02.045 19:12:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:02.045 19:12:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:02.045 19:12:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:02.304 19:12:24 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:35:02.304 19:12:24 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:02.304 19:12:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:02.304 [2024-11-20 19:12:24.609405] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:02.563 nvme0n1 00:35:02.563 19:12:24 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:35:02.563 19:12:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:02.563 19:12:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:02.563 19:12:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:02.563 19:12:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:02.563 19:12:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:35:02.821 19:12:24 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:35:02.821 19:12:24 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:35:02.821 19:12:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:02.821 19:12:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:02.822 19:12:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:02.822 19:12:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:02.822 19:12:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:02.822 19:12:25 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:35:02.822 19:12:25 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:03.080 Running I/O for 1 seconds... 00:35:04.019 19358.00 IOPS, 75.62 MiB/s 00:35:04.019 Latency(us) 00:35:04.019 [2024-11-20T18:12:26.344Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:04.019 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:04.019 nvme0n1 : 1.00 19403.25 75.79 0.00 0.00 6585.09 2590.23 14979.66 00:35:04.019 [2024-11-20T18:12:26.344Z] =================================================================================================================== 00:35:04.019 [2024-11-20T18:12:26.344Z] Total : 19403.25 75.79 0.00 0.00 6585.09 2590.23 14979.66 00:35:04.019 { 00:35:04.019 "results": [ 00:35:04.019 { 00:35:04.019 "job": "nvme0n1", 00:35:04.019 "core_mask": "0x2", 00:35:04.019 "workload": "randrw", 00:35:04.019 "percentage": 50, 00:35:04.019 "status": "finished", 00:35:04.019 "queue_depth": 128, 00:35:04.019 "io_size": 4096, 00:35:04.019 "runtime": 1.004368, 00:35:04.019 "iops": 19403.246618769215, 00:35:04.019 "mibps": 75.79393210456725, 
00:35:04.019 "io_failed": 0, 00:35:04.019 "io_timeout": 0, 00:35:04.019 "avg_latency_us": 6585.089461255767, 00:35:04.019 "min_latency_us": 2590.232380952381, 00:35:04.019 "max_latency_us": 14979.657142857142 00:35:04.019 } 00:35:04.019 ], 00:35:04.019 "core_count": 1 00:35:04.019 } 00:35:04.019 19:12:26 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:04.019 19:12:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:04.278 19:12:26 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:35:04.278 19:12:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:04.278 19:12:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:04.278 19:12:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:04.278 19:12:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:04.278 19:12:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:04.278 19:12:26 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:04.278 19:12:26 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:35:04.278 19:12:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:04.278 19:12:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:04.278 19:12:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:04.278 19:12:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:04.278 19:12:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:04.537 19:12:26 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:35:04.537 19:12:26 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:04.537 19:12:26 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:04.537 19:12:26 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:04.537 19:12:26 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:04.537 19:12:26 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:04.537 19:12:26 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:04.537 19:12:26 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:04.537 19:12:26 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:04.537 19:12:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:04.796 [2024-11-20 19:12:26.976332] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:04.796 [2024-11-20 19:12:26.976526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff91f0 (107): Transport endpoint is not connected 00:35:04.796 [2024-11-20 19:12:26.977521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff91f0 (9): Bad file descriptor 00:35:04.796 [2024-11-20 19:12:26.978522] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:04.796 [2024-11-20 19:12:26.978531] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:04.796 [2024-11-20 19:12:26.978538] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:04.796 [2024-11-20 19:12:26.978546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:35:04.796 request: 00:35:04.796 { 00:35:04.796 "name": "nvme0", 00:35:04.796 "trtype": "tcp", 00:35:04.796 "traddr": "127.0.0.1", 00:35:04.796 "adrfam": "ipv4", 00:35:04.796 "trsvcid": "4420", 00:35:04.796 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:04.796 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:04.796 "prchk_reftag": false, 00:35:04.796 "prchk_guard": false, 00:35:04.796 "hdgst": false, 00:35:04.796 "ddgst": false, 00:35:04.796 "psk": "key1", 00:35:04.796 "allow_unrecognized_csi": false, 00:35:04.796 "method": "bdev_nvme_attach_controller", 00:35:04.796 "req_id": 1 00:35:04.796 } 00:35:04.796 Got JSON-RPC error response 00:35:04.796 response: 00:35:04.796 { 00:35:04.796 "code": -5, 00:35:04.796 "message": "Input/output error" 00:35:04.796 } 00:35:04.796 19:12:26 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:04.796 19:12:26 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:04.796 19:12:26 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:04.796 19:12:26 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:04.796 19:12:26 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:35:04.796 19:12:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:04.796 19:12:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:04.796 19:12:26 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:35:04.796 19:12:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:04.796 19:12:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:05.055 19:12:27 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:05.055 19:12:27 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:35:05.055 19:12:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:05.055 19:12:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:05.055 19:12:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:05.055 19:12:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:05.055 19:12:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:05.314 19:12:27 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:35:05.314 19:12:27 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:35:05.314 19:12:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:05.314 19:12:27 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:35:05.314 19:12:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:05.572 19:12:27 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:35:05.572 19:12:27 keyring_file -- keyring/file.sh@78 -- # jq length 00:35:05.572 19:12:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:05.831 19:12:27 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:35:05.831 19:12:27 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.j45hn0QF98 00:35:05.831 19:12:27 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.j45hn0QF98 00:35:05.831 19:12:27 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:05.831 19:12:27 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.j45hn0QF98 00:35:05.831 19:12:27 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:05.831 19:12:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:05.831 19:12:27 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:05.831 19:12:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:05.831 19:12:27 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.j45hn0QF98 00:35:05.831 19:12:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.j45hn0QF98 00:35:05.831 [2024-11-20 19:12:28.099473] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.j45hn0QF98': 0100660 00:35:05.831 [2024-11-20 19:12:28.099497] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:05.831 request: 00:35:05.831 { 00:35:05.831 "name": "key0", 00:35:05.831 "path": "/tmp/tmp.j45hn0QF98", 00:35:05.831 "method": "keyring_file_add_key", 00:35:05.831 "req_id": 1 00:35:05.831 } 00:35:05.831 Got JSON-RPC error response 00:35:05.831 response: 00:35:05.831 { 00:35:05.831 "code": -1, 00:35:05.831 "message": "Operation not permitted" 00:35:05.831 } 00:35:05.831 19:12:28 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:05.831 19:12:28 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:05.831 19:12:28 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:05.831 19:12:28 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:05.831 19:12:28 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.j45hn0QF98 00:35:05.831 19:12:28 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.j45hn0QF98 00:35:05.831 19:12:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.j45hn0QF98 00:35:06.090 19:12:28 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.j45hn0QF98 00:35:06.090 19:12:28 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:35:06.090 19:12:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:06.090 19:12:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:06.090 19:12:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:06.090 19:12:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:06.090 19:12:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:06.349 19:12:28 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:35:06.349 19:12:28 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:06.349 19:12:28 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:06.349 19:12:28 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:06.349 19:12:28 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:06.349 19:12:28 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:06.349 19:12:28 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:06.349 19:12:28 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:06.349 19:12:28 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:06.349 19:12:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:06.608 [2024-11-20 19:12:28.672995] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.j45hn0QF98': No such file or directory 00:35:06.608 [2024-11-20 19:12:28.673015] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:06.608 [2024-11-20 19:12:28.673032] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:06.608 [2024-11-20 19:12:28.673039] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:35:06.608 [2024-11-20 19:12:28.673050] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:06.608 [2024-11-20 19:12:28.673056] bdev_nvme.c:6764:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:06.608 request: 00:35:06.608 { 00:35:06.608 "name": "nvme0", 00:35:06.608 "trtype": "tcp", 00:35:06.608 "traddr": "127.0.0.1", 00:35:06.608 "adrfam": "ipv4", 00:35:06.608 "trsvcid": "4420", 00:35:06.608 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:06.608 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:35:06.608 "prchk_reftag": false, 00:35:06.608 "prchk_guard": false, 00:35:06.608 "hdgst": false, 00:35:06.608 "ddgst": false, 00:35:06.608 "psk": "key0", 00:35:06.608 "allow_unrecognized_csi": false, 00:35:06.608 "method": "bdev_nvme_attach_controller", 00:35:06.608 "req_id": 1 00:35:06.608 } 00:35:06.608 Got JSON-RPC error response 00:35:06.608 response: 00:35:06.608 { 00:35:06.608 "code": -19, 00:35:06.608 "message": "No such device" 00:35:06.608 } 00:35:06.608 19:12:28 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:06.608 19:12:28 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:06.608 19:12:28 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:06.608 19:12:28 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:06.608 19:12:28 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:35:06.608 19:12:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:06.608 19:12:28 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:06.608 19:12:28 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:06.608 19:12:28 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:06.608 19:12:28 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:06.608 19:12:28 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:06.608 19:12:28 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:06.608 19:12:28 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.nmDQCdL0KM 00:35:06.608 19:12:28 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:06.608 19:12:28 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:06.608 19:12:28 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:35:06.608 19:12:28 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:06.608 19:12:28 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:06.609 19:12:28 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:06.609 19:12:28 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:06.868 19:12:28 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.nmDQCdL0KM 00:35:06.868 19:12:28 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.nmDQCdL0KM 00:35:06.868 19:12:28 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.nmDQCdL0KM 00:35:06.868 19:12:28 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nmDQCdL0KM 00:35:06.868 19:12:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nmDQCdL0KM 00:35:06.868 19:12:29 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:06.868 19:12:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:07.126 nvme0n1 00:35:07.126 19:12:29 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:35:07.126 19:12:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:07.126 19:12:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:07.126 19:12:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:07.126 19:12:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:07.126 19:12:29 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:07.384 19:12:29 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:35:07.384 19:12:29 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:35:07.384 19:12:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:07.686 19:12:29 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:35:07.687 19:12:29 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:35:07.687 19:12:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:07.687 19:12:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:07.687 19:12:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:07.969 19:12:30 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:35:07.970 19:12:30 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:35:07.970 19:12:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:07.970 19:12:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:07.970 19:12:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:07.970 19:12:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:07.970 19:12:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:07.970 19:12:30 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:35:07.970 19:12:30 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:07.970 19:12:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:35:08.229 19:12:30 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:35:08.229 19:12:30 keyring_file -- keyring/file.sh@105 -- # jq length 00:35:08.229 19:12:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:08.489 19:12:30 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:35:08.489 19:12:30 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nmDQCdL0KM 00:35:08.489 19:12:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nmDQCdL0KM 00:35:08.747 19:12:30 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.32Lck1qwcl 00:35:08.747 19:12:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.32Lck1qwcl 00:35:08.747 19:12:31 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:08.747 19:12:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:09.005 nvme0n1 00:35:09.005 19:12:31 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:35:09.005 19:12:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:09.265 19:12:31 keyring_file -- keyring/file.sh@113 -- # config='{ 00:35:09.265 "subsystems": [ 00:35:09.265 { 00:35:09.265 "subsystem": 
"keyring", 00:35:09.265 "config": [ 00:35:09.265 { 00:35:09.265 "method": "keyring_file_add_key", 00:35:09.265 "params": { 00:35:09.265 "name": "key0", 00:35:09.265 "path": "/tmp/tmp.nmDQCdL0KM" 00:35:09.265 } 00:35:09.265 }, 00:35:09.265 { 00:35:09.265 "method": "keyring_file_add_key", 00:35:09.265 "params": { 00:35:09.265 "name": "key1", 00:35:09.265 "path": "/tmp/tmp.32Lck1qwcl" 00:35:09.265 } 00:35:09.265 } 00:35:09.265 ] 00:35:09.265 }, 00:35:09.265 { 00:35:09.265 "subsystem": "iobuf", 00:35:09.265 "config": [ 00:35:09.265 { 00:35:09.265 "method": "iobuf_set_options", 00:35:09.265 "params": { 00:35:09.265 "small_pool_count": 8192, 00:35:09.265 "large_pool_count": 1024, 00:35:09.265 "small_bufsize": 8192, 00:35:09.265 "large_bufsize": 135168, 00:35:09.265 "enable_numa": false 00:35:09.265 } 00:35:09.265 } 00:35:09.265 ] 00:35:09.265 }, 00:35:09.265 { 00:35:09.265 "subsystem": "sock", 00:35:09.265 "config": [ 00:35:09.265 { 00:35:09.265 "method": "sock_set_default_impl", 00:35:09.265 "params": { 00:35:09.265 "impl_name": "posix" 00:35:09.265 } 00:35:09.265 }, 00:35:09.265 { 00:35:09.265 "method": "sock_impl_set_options", 00:35:09.265 "params": { 00:35:09.265 "impl_name": "ssl", 00:35:09.265 "recv_buf_size": 4096, 00:35:09.265 "send_buf_size": 4096, 00:35:09.265 "enable_recv_pipe": true, 00:35:09.265 "enable_quickack": false, 00:35:09.265 "enable_placement_id": 0, 00:35:09.265 "enable_zerocopy_send_server": true, 00:35:09.265 "enable_zerocopy_send_client": false, 00:35:09.265 "zerocopy_threshold": 0, 00:35:09.265 "tls_version": 0, 00:35:09.265 "enable_ktls": false 00:35:09.265 } 00:35:09.265 }, 00:35:09.265 { 00:35:09.265 "method": "sock_impl_set_options", 00:35:09.265 "params": { 00:35:09.265 "impl_name": "posix", 00:35:09.265 "recv_buf_size": 2097152, 00:35:09.265 "send_buf_size": 2097152, 00:35:09.265 "enable_recv_pipe": true, 00:35:09.265 "enable_quickack": false, 00:35:09.265 "enable_placement_id": 0, 00:35:09.265 "enable_zerocopy_send_server": true, 
00:35:09.265 "enable_zerocopy_send_client": false, 00:35:09.265 "zerocopy_threshold": 0, 00:35:09.265 "tls_version": 0, 00:35:09.265 "enable_ktls": false 00:35:09.265 } 00:35:09.265 } 00:35:09.265 ] 00:35:09.265 }, 00:35:09.265 { 00:35:09.265 "subsystem": "vmd", 00:35:09.265 "config": [] 00:35:09.265 }, 00:35:09.265 { 00:35:09.265 "subsystem": "accel", 00:35:09.265 "config": [ 00:35:09.265 { 00:35:09.265 "method": "accel_set_options", 00:35:09.265 "params": { 00:35:09.265 "small_cache_size": 128, 00:35:09.265 "large_cache_size": 16, 00:35:09.265 "task_count": 2048, 00:35:09.265 "sequence_count": 2048, 00:35:09.265 "buf_count": 2048 00:35:09.265 } 00:35:09.265 } 00:35:09.265 ] 00:35:09.265 }, 00:35:09.265 { 00:35:09.265 "subsystem": "bdev", 00:35:09.265 "config": [ 00:35:09.265 { 00:35:09.265 "method": "bdev_set_options", 00:35:09.265 "params": { 00:35:09.265 "bdev_io_pool_size": 65535, 00:35:09.265 "bdev_io_cache_size": 256, 00:35:09.265 "bdev_auto_examine": true, 00:35:09.265 "iobuf_small_cache_size": 128, 00:35:09.265 "iobuf_large_cache_size": 16 00:35:09.265 } 00:35:09.265 }, 00:35:09.265 { 00:35:09.265 "method": "bdev_raid_set_options", 00:35:09.265 "params": { 00:35:09.265 "process_window_size_kb": 1024, 00:35:09.265 "process_max_bandwidth_mb_sec": 0 00:35:09.265 } 00:35:09.265 }, 00:35:09.265 { 00:35:09.265 "method": "bdev_iscsi_set_options", 00:35:09.265 "params": { 00:35:09.265 "timeout_sec": 30 00:35:09.265 } 00:35:09.265 }, 00:35:09.265 { 00:35:09.265 "method": "bdev_nvme_set_options", 00:35:09.265 "params": { 00:35:09.265 "action_on_timeout": "none", 00:35:09.265 "timeout_us": 0, 00:35:09.265 "timeout_admin_us": 0, 00:35:09.265 "keep_alive_timeout_ms": 10000, 00:35:09.265 "arbitration_burst": 0, 00:35:09.265 "low_priority_weight": 0, 00:35:09.265 "medium_priority_weight": 0, 00:35:09.265 "high_priority_weight": 0, 00:35:09.265 "nvme_adminq_poll_period_us": 10000, 00:35:09.265 "nvme_ioq_poll_period_us": 0, 00:35:09.265 "io_queue_requests": 512, 
00:35:09.265 "delay_cmd_submit": true, 00:35:09.265 "transport_retry_count": 4, 00:35:09.265 "bdev_retry_count": 3, 00:35:09.265 "transport_ack_timeout": 0, 00:35:09.265 "ctrlr_loss_timeout_sec": 0, 00:35:09.265 "reconnect_delay_sec": 0, 00:35:09.265 "fast_io_fail_timeout_sec": 0, 00:35:09.265 "disable_auto_failback": false, 00:35:09.265 "generate_uuids": false, 00:35:09.265 "transport_tos": 0, 00:35:09.265 "nvme_error_stat": false, 00:35:09.265 "rdma_srq_size": 0, 00:35:09.265 "io_path_stat": false, 00:35:09.265 "allow_accel_sequence": false, 00:35:09.265 "rdma_max_cq_size": 0, 00:35:09.265 "rdma_cm_event_timeout_ms": 0, 00:35:09.265 "dhchap_digests": [ 00:35:09.265 "sha256", 00:35:09.265 "sha384", 00:35:09.265 "sha512" 00:35:09.265 ], 00:35:09.265 "dhchap_dhgroups": [ 00:35:09.265 "null", 00:35:09.265 "ffdhe2048", 00:35:09.265 "ffdhe3072", 00:35:09.265 "ffdhe4096", 00:35:09.265 "ffdhe6144", 00:35:09.265 "ffdhe8192" 00:35:09.265 ] 00:35:09.265 } 00:35:09.265 }, 00:35:09.265 { 00:35:09.265 "method": "bdev_nvme_attach_controller", 00:35:09.265 "params": { 00:35:09.265 "name": "nvme0", 00:35:09.265 "trtype": "TCP", 00:35:09.265 "adrfam": "IPv4", 00:35:09.265 "traddr": "127.0.0.1", 00:35:09.265 "trsvcid": "4420", 00:35:09.265 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:09.265 "prchk_reftag": false, 00:35:09.265 "prchk_guard": false, 00:35:09.265 "ctrlr_loss_timeout_sec": 0, 00:35:09.265 "reconnect_delay_sec": 0, 00:35:09.265 "fast_io_fail_timeout_sec": 0, 00:35:09.265 "psk": "key0", 00:35:09.265 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:09.265 "hdgst": false, 00:35:09.265 "ddgst": false, 00:35:09.265 "multipath": "multipath" 00:35:09.265 } 00:35:09.265 }, 00:35:09.265 { 00:35:09.265 "method": "bdev_nvme_set_hotplug", 00:35:09.265 "params": { 00:35:09.265 "period_us": 100000, 00:35:09.265 "enable": false 00:35:09.265 } 00:35:09.265 }, 00:35:09.265 { 00:35:09.265 "method": "bdev_wait_for_examine" 00:35:09.265 } 00:35:09.265 ] 00:35:09.265 }, 00:35:09.265 { 
00:35:09.265 "subsystem": "nbd", 00:35:09.265 "config": [] 00:35:09.265 } 00:35:09.265 ] 00:35:09.265 }' 00:35:09.265 19:12:31 keyring_file -- keyring/file.sh@115 -- # killprocess 3925160 00:35:09.265 19:12:31 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3925160 ']' 00:35:09.265 19:12:31 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3925160 00:35:09.265 19:12:31 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:09.265 19:12:31 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:09.265 19:12:31 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3925160 00:35:09.265 19:12:31 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:09.265 19:12:31 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:09.265 19:12:31 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3925160' 00:35:09.265 killing process with pid 3925160 00:35:09.265 19:12:31 keyring_file -- common/autotest_common.sh@973 -- # kill 3925160 00:35:09.265 Received shutdown signal, test time was about 1.000000 seconds 00:35:09.265 00:35:09.265 Latency(us) 00:35:09.265 [2024-11-20T18:12:31.590Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:09.265 [2024-11-20T18:12:31.590Z] =================================================================================================================== 00:35:09.265 [2024-11-20T18:12:31.590Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:09.265 19:12:31 keyring_file -- common/autotest_common.sh@978 -- # wait 3925160 00:35:09.526 19:12:31 keyring_file -- keyring/file.sh@118 -- # bperfpid=3926681 00:35:09.526 19:12:31 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3926681 /var/tmp/bperf.sock 00:35:09.526 19:12:31 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3926681 ']' 00:35:09.526 19:12:31 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:35:09.526 19:12:31 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:09.526 19:12:31 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:09.526 19:12:31 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:09.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:09.526 19:12:31 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:35:09.526 "subsystems": [ 00:35:09.526 { 00:35:09.526 "subsystem": "keyring", 00:35:09.526 "config": [ 00:35:09.526 { 00:35:09.526 "method": "keyring_file_add_key", 00:35:09.526 "params": { 00:35:09.526 "name": "key0", 00:35:09.526 "path": "/tmp/tmp.nmDQCdL0KM" 00:35:09.526 } 00:35:09.526 }, 00:35:09.526 { 00:35:09.526 "method": "keyring_file_add_key", 00:35:09.526 "params": { 00:35:09.526 "name": "key1", 00:35:09.526 "path": "/tmp/tmp.32Lck1qwcl" 00:35:09.526 } 00:35:09.526 } 00:35:09.526 ] 00:35:09.526 }, 00:35:09.526 { 00:35:09.526 "subsystem": "iobuf", 00:35:09.526 "config": [ 00:35:09.526 { 00:35:09.526 "method": "iobuf_set_options", 00:35:09.526 "params": { 00:35:09.526 "small_pool_count": 8192, 00:35:09.526 "large_pool_count": 1024, 00:35:09.526 "small_bufsize": 8192, 00:35:09.526 "large_bufsize": 135168, 00:35:09.526 "enable_numa": false 00:35:09.526 } 00:35:09.526 } 00:35:09.526 ] 00:35:09.526 }, 00:35:09.526 { 00:35:09.526 "subsystem": "sock", 00:35:09.526 "config": [ 00:35:09.526 { 00:35:09.526 "method": "sock_set_default_impl", 00:35:09.526 "params": { 00:35:09.526 "impl_name": "posix" 00:35:09.526 } 00:35:09.526 }, 00:35:09.526 { 00:35:09.526 "method": "sock_impl_set_options", 00:35:09.526 "params": { 00:35:09.526 "impl_name": "ssl", 00:35:09.526 "recv_buf_size": 4096, 00:35:09.526 
"send_buf_size": 4096, 00:35:09.526 "enable_recv_pipe": true, 00:35:09.526 "enable_quickack": false, 00:35:09.526 "enable_placement_id": 0, 00:35:09.526 "enable_zerocopy_send_server": true, 00:35:09.526 "enable_zerocopy_send_client": false, 00:35:09.526 "zerocopy_threshold": 0, 00:35:09.526 "tls_version": 0, 00:35:09.526 "enable_ktls": false 00:35:09.526 } 00:35:09.526 }, 00:35:09.526 { 00:35:09.526 "method": "sock_impl_set_options", 00:35:09.526 "params": { 00:35:09.526 "impl_name": "posix", 00:35:09.526 "recv_buf_size": 2097152, 00:35:09.526 "send_buf_size": 2097152, 00:35:09.526 "enable_recv_pipe": true, 00:35:09.526 "enable_quickack": false, 00:35:09.526 "enable_placement_id": 0, 00:35:09.526 "enable_zerocopy_send_server": true, 00:35:09.526 "enable_zerocopy_send_client": false, 00:35:09.526 "zerocopy_threshold": 0, 00:35:09.526 "tls_version": 0, 00:35:09.526 "enable_ktls": false 00:35:09.526 } 00:35:09.526 } 00:35:09.526 ] 00:35:09.526 }, 00:35:09.526 { 00:35:09.526 "subsystem": "vmd", 00:35:09.526 "config": [] 00:35:09.526 }, 00:35:09.526 { 00:35:09.526 "subsystem": "accel", 00:35:09.526 "config": [ 00:35:09.526 { 00:35:09.526 "method": "accel_set_options", 00:35:09.526 "params": { 00:35:09.526 "small_cache_size": 128, 00:35:09.526 "large_cache_size": 16, 00:35:09.526 "task_count": 2048, 00:35:09.526 "sequence_count": 2048, 00:35:09.526 "buf_count": 2048 00:35:09.526 } 00:35:09.526 } 00:35:09.526 ] 00:35:09.526 }, 00:35:09.526 { 00:35:09.526 "subsystem": "bdev", 00:35:09.526 "config": [ 00:35:09.526 { 00:35:09.526 "method": "bdev_set_options", 00:35:09.526 "params": { 00:35:09.526 "bdev_io_pool_size": 65535, 00:35:09.526 "bdev_io_cache_size": 256, 00:35:09.526 "bdev_auto_examine": true, 00:35:09.526 "iobuf_small_cache_size": 128, 00:35:09.526 "iobuf_large_cache_size": 16 00:35:09.526 } 00:35:09.526 }, 00:35:09.526 { 00:35:09.526 "method": "bdev_raid_set_options", 00:35:09.526 "params": { 00:35:09.526 "process_window_size_kb": 1024, 00:35:09.526 
"process_max_bandwidth_mb_sec": 0 00:35:09.526 } 00:35:09.526 }, 00:35:09.526 { 00:35:09.526 "method": "bdev_iscsi_set_options", 00:35:09.526 "params": { 00:35:09.526 "timeout_sec": 30 00:35:09.526 } 00:35:09.526 }, 00:35:09.526 { 00:35:09.526 "method": "bdev_nvme_set_options", 00:35:09.526 "params": { 00:35:09.526 "action_on_timeout": "none", 00:35:09.526 "timeout_us": 0, 00:35:09.526 "timeout_admin_us": 0, 00:35:09.526 "keep_alive_timeout_ms": 10000, 00:35:09.526 "arbitration_burst": 0, 00:35:09.526 "low_priority_weight": 0, 00:35:09.526 "medium_priority_weight": 0, 00:35:09.526 "high_priority_weight": 0, 00:35:09.526 "nvme_adminq_poll_period_us": 10000, 00:35:09.526 "nvme_ioq_poll_period_us": 0, 00:35:09.526 "io_queue_requests": 512, 00:35:09.526 "delay_cmd_submit": true, 00:35:09.526 "transport_retry_count": 4, 00:35:09.526 "bdev_retry_count": 3, 00:35:09.526 "transport_ack_timeout": 0, 00:35:09.526 "ctrlr_loss_timeout_sec": 0, 00:35:09.526 "reconnect_delay_sec": 0, 00:35:09.526 "fast_io_fail_timeout_sec": 0, 00:35:09.526 "disable_auto_failback": false, 00:35:09.526 "generate_uuids": false, 00:35:09.526 "transport_tos": 0, 00:35:09.526 "nvme_error_stat": false, 00:35:09.526 "rdma_srq_size": 0, 00:35:09.526 "io_path_stat": false, 00:35:09.526 "allow_accel_sequence": false, 00:35:09.526 "rdma_max_cq_size": 0, 00:35:09.526 "rdma_cm_event_timeout_ms": 0, 00:35:09.526 "dhchap_digests": [ 00:35:09.526 "sha256", 00:35:09.526 "sha384", 00:35:09.526 "sha512" 00:35:09.526 ], 00:35:09.526 "dhchap_dhgroups": [ 00:35:09.526 "null", 00:35:09.526 "ffdhe2048", 00:35:09.526 "ffdhe3072", 00:35:09.526 "ffdhe4096", 00:35:09.526 "ffdhe6144", 00:35:09.526 "ffdhe8192" 00:35:09.526 ] 00:35:09.526 } 00:35:09.526 }, 00:35:09.526 { 00:35:09.526 "method": "bdev_nvme_attach_controller", 00:35:09.526 "params": { 00:35:09.527 "name": "nvme0", 00:35:09.527 "trtype": "TCP", 00:35:09.527 "adrfam": "IPv4", 00:35:09.527 "traddr": "127.0.0.1", 00:35:09.527 "trsvcid": "4420", 00:35:09.527 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:35:09.527 "prchk_reftag": false, 00:35:09.527 "prchk_guard": false, 00:35:09.527 "ctrlr_loss_timeout_sec": 0, 00:35:09.527 "reconnect_delay_sec": 0, 00:35:09.527 "fast_io_fail_timeout_sec": 0, 00:35:09.527 "psk": "key0", 00:35:09.527 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:09.527 "hdgst": false, 00:35:09.527 "ddgst": false, 00:35:09.527 "multipath": "multipath" 00:35:09.527 } 00:35:09.527 }, 00:35:09.527 { 00:35:09.527 "method": "bdev_nvme_set_hotplug", 00:35:09.527 "params": { 00:35:09.527 "period_us": 100000, 00:35:09.527 "enable": false 00:35:09.527 } 00:35:09.527 }, 00:35:09.527 { 00:35:09.527 "method": "bdev_wait_for_examine" 00:35:09.527 } 00:35:09.527 ] 00:35:09.527 }, 00:35:09.527 { 00:35:09.527 "subsystem": "nbd", 00:35:09.527 "config": [] 00:35:09.527 } 00:35:09.527 ] 00:35:09.527 }' 00:35:09.527 19:12:31 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:09.527 19:12:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:09.527 [2024-11-20 19:12:31.782169] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:35:09.527 [2024-11-20 19:12:31.782236] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3926681 ] 00:35:09.786 [2024-11-20 19:12:31.857477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:09.786 [2024-11-20 19:12:31.897391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:09.786 [2024-11-20 19:12:32.058330] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:10.353 19:12:32 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:10.353 19:12:32 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:10.353 19:12:32 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:35:10.353 19:12:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:10.353 19:12:32 keyring_file -- keyring/file.sh@121 -- # jq length 00:35:10.611 19:12:32 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:10.611 19:12:32 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:35:10.611 19:12:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:10.611 19:12:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:10.611 19:12:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:10.611 19:12:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:10.612 19:12:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:10.870 19:12:33 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:35:10.870 19:12:33 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:35:10.870 19:12:33 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:10.870 19:12:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:10.870 19:12:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:10.870 19:12:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:10.870 19:12:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:11.129 19:12:33 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:35:11.129 19:12:33 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:35:11.129 19:12:33 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:35:11.129 19:12:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:11.129 19:12:33 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:35:11.129 19:12:33 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:11.129 19:12:33 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.nmDQCdL0KM /tmp/tmp.32Lck1qwcl 00:35:11.129 19:12:33 keyring_file -- keyring/file.sh@20 -- # killprocess 3926681 00:35:11.129 19:12:33 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3926681 ']' 00:35:11.129 19:12:33 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3926681 00:35:11.129 19:12:33 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:11.129 19:12:33 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:11.129 19:12:33 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3926681 00:35:11.389 19:12:33 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:11.389 19:12:33 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:11.389 19:12:33 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 3926681' 00:35:11.389 killing process with pid 3926681 00:35:11.389 19:12:33 keyring_file -- common/autotest_common.sh@973 -- # kill 3926681 00:35:11.389 Received shutdown signal, test time was about 1.000000 seconds 00:35:11.389 00:35:11.389 Latency(us) 00:35:11.389 [2024-11-20T18:12:33.714Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:11.389 [2024-11-20T18:12:33.714Z] =================================================================================================================== 00:35:11.389 [2024-11-20T18:12:33.714Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:11.389 19:12:33 keyring_file -- common/autotest_common.sh@978 -- # wait 3926681 00:35:11.389 19:12:33 keyring_file -- keyring/file.sh@21 -- # killprocess 3925156 00:35:11.389 19:12:33 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3925156 ']' 00:35:11.389 19:12:33 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3925156 00:35:11.389 19:12:33 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:11.389 19:12:33 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:11.389 19:12:33 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3925156 00:35:11.389 19:12:33 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:11.389 19:12:33 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:11.389 19:12:33 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3925156' 00:35:11.389 killing process with pid 3925156 00:35:11.389 19:12:33 keyring_file -- common/autotest_common.sh@973 -- # kill 3925156 00:35:11.389 19:12:33 keyring_file -- common/autotest_common.sh@978 -- # wait 3925156 00:35:11.958 00:35:11.958 real 0m11.712s 00:35:11.958 user 0m29.066s 00:35:11.958 sys 0m2.734s 00:35:11.958 19:12:33 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:35:11.958 19:12:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:11.958 ************************************ 00:35:11.958 END TEST keyring_file 00:35:11.958 ************************************ 00:35:11.958 19:12:34 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:35:11.958 19:12:34 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:11.958 19:12:34 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:11.958 19:12:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:11.958 19:12:34 -- common/autotest_common.sh@10 -- # set +x 00:35:11.958 ************************************ 00:35:11.958 START TEST keyring_linux 00:35:11.958 ************************************ 00:35:11.958 19:12:34 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:11.958 Joined session keyring: 767520536 00:35:11.958 * Looking for test storage... 
00:35:11.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:11.958 19:12:34 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:11.958 19:12:34 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:35:11.958 19:12:34 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:11.958 19:12:34 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:11.958 19:12:34 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:11.958 19:12:34 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:11.958 19:12:34 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:11.958 19:12:34 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:35:11.958 19:12:34 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:35:11.958 19:12:34 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:35:11.958 19:12:34 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:35:11.958 19:12:34 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:35:11.958 19:12:34 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:35:11.958 19:12:34 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:35:11.958 19:12:34 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:11.958 19:12:34 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:35:11.958 19:12:34 keyring_linux -- scripts/common.sh@345 -- # : 1 00:35:11.958 19:12:34 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:11.959 19:12:34 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:11.959 19:12:34 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:35:11.959 19:12:34 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:35:11.959 19:12:34 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:11.959 19:12:34 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:35:11.959 19:12:34 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:35:11.959 19:12:34 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:35:11.959 19:12:34 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:35:11.959 19:12:34 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:11.959 19:12:34 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:35:11.959 19:12:34 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:35:11.959 19:12:34 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:11.959 19:12:34 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:11.959 19:12:34 keyring_linux -- scripts/common.sh@368 -- # return 0 00:35:11.959 19:12:34 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:11.959 19:12:34 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:11.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:11.959 --rc genhtml_branch_coverage=1 00:35:11.959 --rc genhtml_function_coverage=1 00:35:11.959 --rc genhtml_legend=1 00:35:11.959 --rc geninfo_all_blocks=1 00:35:11.959 --rc geninfo_unexecuted_blocks=1 00:35:11.959 00:35:11.959 ' 00:35:11.959 19:12:34 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:11.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:11.959 --rc genhtml_branch_coverage=1 00:35:11.959 --rc genhtml_function_coverage=1 00:35:11.959 --rc genhtml_legend=1 00:35:11.959 --rc geninfo_all_blocks=1 00:35:11.959 --rc geninfo_unexecuted_blocks=1 00:35:11.959 00:35:11.959 ' 
00:35:11.959 19:12:34 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:11.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:11.959 --rc genhtml_branch_coverage=1 00:35:11.959 --rc genhtml_function_coverage=1 00:35:11.959 --rc genhtml_legend=1 00:35:11.959 --rc geninfo_all_blocks=1 00:35:11.959 --rc geninfo_unexecuted_blocks=1 00:35:11.959 00:35:11.959 ' 00:35:11.959 19:12:34 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:11.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:11.959 --rc genhtml_branch_coverage=1 00:35:11.959 --rc genhtml_function_coverage=1 00:35:11.959 --rc genhtml_legend=1 00:35:11.959 --rc geninfo_all_blocks=1 00:35:11.959 --rc geninfo_unexecuted_blocks=1 00:35:11.959 00:35:11.959 ' 00:35:11.959 19:12:34 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:11.959 19:12:34 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:11.959 19:12:34 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:11.959 19:12:34 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:11.959 19:12:34 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:11.959 19:12:34 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:11.959 19:12:34 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:11.959 19:12:34 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:11.959 19:12:34 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:11.959 19:12:34 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:11.959 19:12:34 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:11.959 19:12:34 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:11.959 19:12:34 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:35:11.959 19:12:34 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:11.959 19:12:34 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:11.959 19:12:34 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:11.959 19:12:34 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:11.959 19:12:34 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:11.959 19:12:34 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:11.959 19:12:34 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:11.959 19:12:34 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:35:11.959 19:12:34 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:11.959 19:12:34 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:11.959 19:12:34 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:11.959 19:12:34 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.959 19:12:34 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.959 19:12:34 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.959 19:12:34 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:11.959 19:12:34 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.959 19:12:34 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:35:11.959 19:12:34 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:11.959 19:12:34 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:11.959 19:12:34 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:11.959 19:12:34 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:11.959 19:12:34 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:11.959 19:12:34 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:11.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:11.959 19:12:34 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:11.959 19:12:34 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:11.959 19:12:34 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:11.959 19:12:34 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:11.959 19:12:34 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:11.959 19:12:34 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:11.959 19:12:34 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:11.959 19:12:34 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:11.959 19:12:34 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:11.959 19:12:34 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:11.959 19:12:34 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:11.959 19:12:34 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:11.959 19:12:34 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:11.959 19:12:34 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:11.959 19:12:34 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:11.959 19:12:34 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:11.959 19:12:34 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:11.959 19:12:34 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:11.959 19:12:34 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:11.959 19:12:34 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:35:11.959 19:12:34 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:11.959 19:12:34 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:12.219 19:12:34 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:12.219 19:12:34 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:12.219 /tmp/:spdk-test:key0 00:35:12.219 19:12:34 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:12.219 19:12:34 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:12.219 19:12:34 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:12.219 19:12:34 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:12.219 19:12:34 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:12.219 19:12:34 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:12.219 19:12:34 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:12.219 19:12:34 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:12.219 19:12:34 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:12.219 19:12:34 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:12.219 19:12:34 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:12.219 19:12:34 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:12.219 19:12:34 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:12.219 19:12:34 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:12.219 19:12:34 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:12.219 /tmp/:spdk-test:key1 00:35:12.219 19:12:34 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3927236 00:35:12.219 19:12:34 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:12.219 19:12:34 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3927236 00:35:12.219 19:12:34 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3927236 ']' 00:35:12.219 19:12:34 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:12.219 19:12:34 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:12.219 19:12:34 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:12.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:12.219 19:12:34 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:12.219 19:12:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:12.219 [2024-11-20 19:12:34.388756] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:35:12.219 [2024-11-20 19:12:34.388807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3927236 ] 00:35:12.219 [2024-11-20 19:12:34.461079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:12.219 [2024-11-20 19:12:34.503002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:12.478 19:12:34 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:12.478 19:12:34 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:12.478 19:12:34 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:12.478 19:12:34 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.478 19:12:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:12.478 [2024-11-20 19:12:34.718251] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:12.478 null0 00:35:12.478 [2024-11-20 19:12:34.750309] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:12.478 [2024-11-20 19:12:34.750678] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:12.478 19:12:34 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.478 19:12:34 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:12.478 996551663 00:35:12.478 19:12:34 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:12.478 148355983 00:35:12.478 19:12:34 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3927241 00:35:12.478 19:12:34 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3927241 /var/tmp/bperf.sock 00:35:12.478 19:12:34 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:12.478 19:12:34 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3927241 ']' 00:35:12.478 19:12:34 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:12.478 19:12:34 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:12.478 19:12:34 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:12.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:12.478 19:12:34 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:12.478 19:12:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:12.737 [2024-11-20 19:12:34.822991] Starting SPDK v25.01-pre git sha1 bd9804982 / DPDK 24.03.0 initialization... 
00:35:12.737 [2024-11-20 19:12:34.823036] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3927241 ] 00:35:12.737 [2024-11-20 19:12:34.897135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:12.737 [2024-11-20 19:12:34.939057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:12.737 19:12:34 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:12.737 19:12:34 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:12.737 19:12:34 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:12.737 19:12:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:12.995 19:12:35 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:12.995 19:12:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:13.254 19:12:35 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:13.254 19:12:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:13.254 [2024-11-20 19:12:35.572835] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:13.512 nvme0n1 00:35:13.512 19:12:35 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:35:13.512 19:12:35 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:13.512 19:12:35 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:13.512 19:12:35 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:13.512 19:12:35 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:13.512 19:12:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:13.771 19:12:35 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:13.771 19:12:35 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:13.771 19:12:35 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:13.771 19:12:35 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:13.771 19:12:35 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:13.771 19:12:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:13.771 19:12:35 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:13.771 19:12:36 keyring_linux -- keyring/linux.sh@25 -- # sn=996551663 00:35:13.771 19:12:36 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:13.771 19:12:36 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:13.771 19:12:36 keyring_linux -- keyring/linux.sh@26 -- # [[ 996551663 == \9\9\6\5\5\1\6\6\3 ]] 00:35:13.771 19:12:36 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 996551663 00:35:13.771 19:12:36 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:13.771 19:12:36 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:14.030 Running I/O for 1 seconds... 00:35:14.966 21780.00 IOPS, 85.08 MiB/s 00:35:14.966 Latency(us) 00:35:14.966 [2024-11-20T18:12:37.291Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.966 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:14.966 nvme0n1 : 1.01 21779.76 85.08 0.00 0.00 5857.93 3495.25 8363.64 00:35:14.966 [2024-11-20T18:12:37.291Z] =================================================================================================================== 00:35:14.966 [2024-11-20T18:12:37.291Z] Total : 21779.76 85.08 0.00 0.00 5857.93 3495.25 8363.64 00:35:14.966 { 00:35:14.966 "results": [ 00:35:14.966 { 00:35:14.966 "job": "nvme0n1", 00:35:14.966 "core_mask": "0x2", 00:35:14.966 "workload": "randread", 00:35:14.966 "status": "finished", 00:35:14.966 "queue_depth": 128, 00:35:14.966 "io_size": 4096, 00:35:14.966 "runtime": 1.005888, 00:35:14.966 "iops": 21779.760768594515, 00:35:14.966 "mibps": 85.07719050232232, 00:35:14.966 "io_failed": 0, 00:35:14.966 "io_timeout": 0, 00:35:14.966 "avg_latency_us": 5857.929921315979, 00:35:14.966 "min_latency_us": 3495.2533333333336, 00:35:14.966 "max_latency_us": 8363.641904761906 00:35:14.966 } 00:35:14.966 ], 00:35:14.966 "core_count": 1 00:35:14.966 } 00:35:14.966 19:12:37 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:14.966 19:12:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:15.225 19:12:37 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:35:15.225 19:12:37 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:35:15.225 19:12:37 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:15.225 19:12:37 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:15.225 19:12:37 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:15.225 19:12:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:15.484 19:12:37 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:35:15.484 19:12:37 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:15.484 19:12:37 keyring_linux -- keyring/linux.sh@23 -- # return 00:35:15.484 19:12:37 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:15.484 19:12:37 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:35:15.484 19:12:37 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:15.484 19:12:37 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:15.484 19:12:37 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:15.484 19:12:37 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:15.484 19:12:37 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:15.484 19:12:37 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:15.484 19:12:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:15.484 [2024-11-20 19:12:37.778825] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:15.484 [2024-11-20 19:12:37.779444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b3f60 (107): Transport endpoint is not connected 00:35:15.484 [2024-11-20 19:12:37.780441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b3f60 (9): Bad file descriptor 00:35:15.484 [2024-11-20 19:12:37.781442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:15.484 [2024-11-20 19:12:37.781451] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:15.484 [2024-11-20 19:12:37.781458] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:15.484 [2024-11-20 19:12:37.781466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:35:15.484 request: 00:35:15.484 { 00:35:15.484 "name": "nvme0", 00:35:15.484 "trtype": "tcp", 00:35:15.484 "traddr": "127.0.0.1", 00:35:15.484 "adrfam": "ipv4", 00:35:15.484 "trsvcid": "4420", 00:35:15.484 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:15.484 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:15.484 "prchk_reftag": false, 00:35:15.484 "prchk_guard": false, 00:35:15.484 "hdgst": false, 00:35:15.484 "ddgst": false, 00:35:15.484 "psk": ":spdk-test:key1", 00:35:15.484 "allow_unrecognized_csi": false, 00:35:15.484 "method": "bdev_nvme_attach_controller", 00:35:15.484 "req_id": 1 00:35:15.484 } 00:35:15.484 Got JSON-RPC error response 00:35:15.484 response: 00:35:15.484 { 00:35:15.484 "code": -5, 00:35:15.484 "message": "Input/output error" 00:35:15.484 } 00:35:15.743 19:12:37 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:35:15.743 19:12:37 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:15.743 19:12:37 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:15.743 19:12:37 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:15.743 19:12:37 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:35:15.743 19:12:37 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:15.743 19:12:37 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:35:15.743 19:12:37 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:35:15.743 19:12:37 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:35:15.743 19:12:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:15.743 19:12:37 keyring_linux -- keyring/linux.sh@33 -- # sn=996551663 00:35:15.743 19:12:37 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 996551663 00:35:15.743 1 links removed 00:35:15.743 19:12:37 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:15.743 19:12:37 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:35:15.743 
19:12:37 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:35:15.744 19:12:37 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:35:15.744 19:12:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:35:15.744 19:12:37 keyring_linux -- keyring/linux.sh@33 -- # sn=148355983 00:35:15.744 19:12:37 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 148355983 00:35:15.744 1 links removed 00:35:15.744 19:12:37 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3927241 00:35:15.744 19:12:37 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3927241 ']' 00:35:15.744 19:12:37 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3927241 00:35:15.744 19:12:37 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:15.744 19:12:37 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:15.744 19:12:37 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3927241 00:35:15.744 19:12:37 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:15.744 19:12:37 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:15.744 19:12:37 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3927241' 00:35:15.744 killing process with pid 3927241 00:35:15.744 19:12:37 keyring_linux -- common/autotest_common.sh@973 -- # kill 3927241 00:35:15.744 Received shutdown signal, test time was about 1.000000 seconds 00:35:15.744 00:35:15.744 Latency(us) 00:35:15.744 [2024-11-20T18:12:38.069Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:15.744 [2024-11-20T18:12:38.069Z] =================================================================================================================== 00:35:15.744 [2024-11-20T18:12:38.069Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:15.744 19:12:37 keyring_linux -- common/autotest_common.sh@978 -- # wait 3927241 
00:35:15.744 19:12:38 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3927236 00:35:15.744 19:12:38 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3927236 ']' 00:35:15.744 19:12:38 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3927236 00:35:15.744 19:12:38 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:15.744 19:12:38 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:15.744 19:12:38 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3927236 00:35:16.003 19:12:38 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:16.003 19:12:38 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:16.003 19:12:38 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3927236' 00:35:16.003 killing process with pid 3927236 00:35:16.003 19:12:38 keyring_linux -- common/autotest_common.sh@973 -- # kill 3927236 00:35:16.003 19:12:38 keyring_linux -- common/autotest_common.sh@978 -- # wait 3927236 00:35:16.261 00:35:16.261 real 0m4.343s 00:35:16.261 user 0m8.168s 00:35:16.261 sys 0m1.466s 00:35:16.261 19:12:38 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:16.261 19:12:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:16.261 ************************************ 00:35:16.261 END TEST keyring_linux 00:35:16.261 ************************************ 00:35:16.261 19:12:38 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:35:16.261 19:12:38 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:35:16.261 19:12:38 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:35:16.261 19:12:38 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:35:16.261 19:12:38 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:35:16.261 19:12:38 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:35:16.261 19:12:38 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:35:16.261 19:12:38 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:35:16.261 19:12:38 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:16.261 19:12:38 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:16.261 19:12:38 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:35:16.261 19:12:38 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:16.261 19:12:38 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:16.261 19:12:38 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:35:16.261 19:12:38 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:35:16.261 19:12:38 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:35:16.261 19:12:38 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:35:16.261 19:12:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:16.261 19:12:38 -- common/autotest_common.sh@10 -- # set +x 00:35:16.262 19:12:38 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:35:16.262 19:12:38 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:35:16.262 19:12:38 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:35:16.262 19:12:38 -- common/autotest_common.sh@10 -- # set +x 00:35:21.534 INFO: APP EXITING 00:35:21.534 INFO: killing all VMs 00:35:21.534 INFO: killing vhost app 00:35:21.534 INFO: EXIT DONE 00:35:24.069 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:35:24.069 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:35:24.069 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:35:24.069 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:35:24.069 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:35:24.069 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:35:24.069 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:35:24.070 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:35:24.070 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:35:24.070 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:35:24.070 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:35:24.070 
0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:35:24.070 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:35:24.070 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:35:24.070 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:35:24.070 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:35:24.070 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:35:27.362 Cleaning 00:35:27.363 Removing: /var/run/dpdk/spdk0/config 00:35:27.363 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:27.363 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:27.363 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:27.363 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:27.363 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:35:27.363 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:35:27.363 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:35:27.363 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:35:27.363 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:27.363 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:27.363 Removing: /var/run/dpdk/spdk1/config 00:35:27.363 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:35:27.363 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:35:27.363 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:35:27.363 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:35:27.363 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:35:27.363 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:35:27.363 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:35:27.363 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:35:27.363 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:35:27.363 Removing: /var/run/dpdk/spdk1/hugepage_info 00:35:27.363 Removing: /var/run/dpdk/spdk2/config 00:35:27.363 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:35:27.363 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:35:27.363 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:35:27.363 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:35:27.363 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:35:27.363 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:35:27.363 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:35:27.363 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:35:27.363 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:35:27.363 Removing: /var/run/dpdk/spdk2/hugepage_info 00:35:27.363 Removing: /var/run/dpdk/spdk3/config 00:35:27.363 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:35:27.363 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:35:27.363 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:35:27.363 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:35:27.363 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:35:27.363 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:35:27.363 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:35:27.363 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:35:27.363 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:35:27.363 Removing: /var/run/dpdk/spdk3/hugepage_info 00:35:27.363 Removing: /var/run/dpdk/spdk4/config 00:35:27.363 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:35:27.363 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:35:27.363 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:35:27.363 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:35:27.363 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:35:27.363 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:35:27.363 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:35:27.363 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:35:27.363 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:35:27.363 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:35:27.363 Removing: /dev/shm/bdev_svc_trace.1 00:35:27.363 Removing: /dev/shm/nvmf_trace.0 00:35:27.363 Removing: /dev/shm/spdk_tgt_trace.pid3446719 00:35:27.363 Removing: /var/run/dpdk/spdk0 00:35:27.363 Removing: /var/run/dpdk/spdk1 00:35:27.363 Removing: /var/run/dpdk/spdk2 00:35:27.363 Removing: /var/run/dpdk/spdk3 00:35:27.363 Removing: /var/run/dpdk/spdk4 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3444339 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3445503 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3446719 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3447354 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3448310 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3448528 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3449518 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3449524 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3449877 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3451600 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3452905 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3453235 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3453491 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3453793 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3454085 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3454335 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3454587 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3454869 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3455611 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3458609 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3458865 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3459119 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3459130 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3459624 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3459636 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3460122 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3460136 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3460565 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3460672 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3461003 00:35:27.363 Removing: 
/var/run/dpdk/spdk_pid3461026 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3461585 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3462069 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3462519 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3466239 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3470632 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3480740 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3481310 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3485653 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3485961 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3490241 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3496139 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3498740 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3509047 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3518600 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3520320 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3521296 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3538238 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3542170 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3587373 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3592721 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3598537 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3605046 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3605053 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3605965 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3606880 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3607606 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3608258 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3608263 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3608540 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3608789 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3608831 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3610132 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3610864 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3611770 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3612438 00:35:27.363 Removing: /var/run/dpdk/spdk_pid3612459 00:35:27.364 Removing: /var/run/dpdk/spdk_pid3612692 
00:35:27.364 Removing: /var/run/dpdk/spdk_pid3613710 00:35:27.364 Removing: /var/run/dpdk/spdk_pid3614694 00:35:27.364 Removing: /var/run/dpdk/spdk_pid3623004 00:35:27.364 Removing: /var/run/dpdk/spdk_pid3652134 00:35:27.364 Removing: /var/run/dpdk/spdk_pid3656648 00:35:27.364 Removing: /var/run/dpdk/spdk_pid3658257 00:35:27.364 Removing: /var/run/dpdk/spdk_pid3660085 00:35:27.364 Removing: /var/run/dpdk/spdk_pid3660128 00:35:27.364 Removing: /var/run/dpdk/spdk_pid3660341 00:35:27.364 Removing: /var/run/dpdk/spdk_pid3660569 00:35:27.364 Removing: /var/run/dpdk/spdk_pid3661065 00:35:27.364 Removing: /var/run/dpdk/spdk_pid3662777 00:35:27.364 Removing: /var/run/dpdk/spdk_pid3663684 00:35:27.364 Removing: /var/run/dpdk/spdk_pid3664184 00:35:27.364 Removing: /var/run/dpdk/spdk_pid3666297 00:35:27.364 Removing: /var/run/dpdk/spdk_pid3666773 00:35:27.364 Removing: /var/run/dpdk/spdk_pid3667498 00:35:27.364 Removing: /var/run/dpdk/spdk_pid3671664 00:35:27.364 Removing: /var/run/dpdk/spdk_pid3677284 00:35:27.364 Removing: /var/run/dpdk/spdk_pid3677285 00:35:27.364 Removing: /var/run/dpdk/spdk_pid3677286 00:35:27.364 Removing: /var/run/dpdk/spdk_pid3681173 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3690056 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3694073 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3700289 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3701595 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3702926 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3704461 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3708959 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3713301 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3717328 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3724925 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3724929 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3729643 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3729874 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3730104 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3730563 00:35:27.623 Removing: 
/var/run/dpdk/spdk_pid3730572 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3735276 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3735833 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3740709 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3743463 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3748750 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3754202 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3762991 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3769986 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3770043 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3789277 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3789753 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3790398 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3790920 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3791674 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3792148 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3792686 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3793318 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3797380 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3797654 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3803657 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3803932 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3809395 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3813633 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3823371 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3824049 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3828386 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3828687 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3833315 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3838959 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3841758 00:35:27.623 Removing: /var/run/dpdk/spdk_pid3851702 00:35:27.624 Removing: /var/run/dpdk/spdk_pid3860370 00:35:27.624 Removing: /var/run/dpdk/spdk_pid3862144 00:35:27.624 Removing: /var/run/dpdk/spdk_pid3863029 00:35:27.624 Removing: /var/run/dpdk/spdk_pid3879663 00:35:27.624 Removing: /var/run/dpdk/spdk_pid3883579 00:35:27.624 Removing: /var/run/dpdk/spdk_pid3886269 
00:35:27.624 Removing: /var/run/dpdk/spdk_pid3894458 00:35:27.624 Removing: /var/run/dpdk/spdk_pid3894463 00:35:27.624 Removing: /var/run/dpdk/spdk_pid3899536 00:35:27.624 Removing: /var/run/dpdk/spdk_pid3901467 00:35:27.624 Removing: /var/run/dpdk/spdk_pid3903433 00:35:27.624 Removing: /var/run/dpdk/spdk_pid3904638 00:35:27.624 Removing: /var/run/dpdk/spdk_pid3906666 00:35:27.624 Removing: /var/run/dpdk/spdk_pid3907725 00:35:27.624 Removing: /var/run/dpdk/spdk_pid3916472 00:35:27.883 Removing: /var/run/dpdk/spdk_pid3916934 00:35:27.883 Removing: /var/run/dpdk/spdk_pid3917392 00:35:27.883 Removing: /var/run/dpdk/spdk_pid3920012 00:35:27.883 Removing: /var/run/dpdk/spdk_pid3920606 00:35:27.883 Removing: /var/run/dpdk/spdk_pid3921330 00:35:27.883 Removing: /var/run/dpdk/spdk_pid3925156 00:35:27.883 Removing: /var/run/dpdk/spdk_pid3925160 00:35:27.883 Removing: /var/run/dpdk/spdk_pid3926681 00:35:27.883 Removing: /var/run/dpdk/spdk_pid3927236 00:35:27.883 Removing: /var/run/dpdk/spdk_pid3927241 00:35:27.883 Clean 00:35:27.883 19:12:50 -- common/autotest_common.sh@1453 -- # return 0 00:35:27.883 19:12:50 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:35:27.883 19:12:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:27.883 19:12:50 -- common/autotest_common.sh@10 -- # set +x 00:35:27.883 19:12:50 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:35:27.883 19:12:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:27.883 19:12:50 -- common/autotest_common.sh@10 -- # set +x 00:35:27.883 19:12:50 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:35:27.883 19:12:50 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:35:27.883 19:12:50 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:35:27.883 19:12:50 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:35:27.883 19:12:50 
-- spdk/autotest.sh@398 -- # hostname 00:35:27.883 19:12:50 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:35:28.142 geninfo: WARNING: invalid characters removed from testname! 00:35:50.082 19:13:10 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:51.461 19:13:13 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:53.366 19:13:15 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:55.272 19:13:17 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:56.653 19:13:18 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:58.558 19:13:20 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:00.462 19:13:22 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:00.462 19:13:22 -- spdk/autorun.sh@1 -- $ timing_finish 00:36:00.462 19:13:22 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:36:00.462 19:13:22 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:00.462 19:13:22 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:36:00.462 19:13:22 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:00.462 + [[ -n 
3366836 ]] 00:36:00.463 + sudo kill 3366836 00:36:00.472 [Pipeline] } 00:36:00.486 [Pipeline] // stage 00:36:00.490 [Pipeline] } 00:36:00.503 [Pipeline] // timeout 00:36:00.508 [Pipeline] } 00:36:00.519 [Pipeline] // catchError 00:36:00.523 [Pipeline] } 00:36:00.537 [Pipeline] // wrap 00:36:00.542 [Pipeline] } 00:36:00.554 [Pipeline] // catchError 00:36:00.563 [Pipeline] stage 00:36:00.565 [Pipeline] { (Epilogue) 00:36:00.577 [Pipeline] catchError 00:36:00.578 [Pipeline] { 00:36:00.589 [Pipeline] echo 00:36:00.590 Cleanup processes 00:36:00.595 [Pipeline] sh 00:36:00.879 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:00.879 3937960 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:00.892 [Pipeline] sh 00:36:01.254 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:01.254 ++ grep -v 'sudo pgrep' 00:36:01.254 ++ awk '{print $1}' 00:36:01.254 + sudo kill -9 00:36:01.254 + true 00:36:01.282 [Pipeline] sh 00:36:01.564 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:36:13.783 [Pipeline] sh 00:36:14.064 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:36:14.064 Artifacts sizes are good 00:36:14.078 [Pipeline] archiveArtifacts 00:36:14.086 Archiving artifacts 00:36:14.258 [Pipeline] sh 00:36:14.539 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:36:14.550 [Pipeline] cleanWs 00:36:14.558 [WS-CLEANUP] Deleting project workspace... 00:36:14.558 [WS-CLEANUP] Deferred wipeout is used... 00:36:14.564 [WS-CLEANUP] done 00:36:14.565 [Pipeline] } 00:36:14.582 [Pipeline] // catchError 00:36:14.596 [Pipeline] sh 00:36:14.877 + logger -p user.info -t JENKINS-CI 00:36:14.885 [Pipeline] } 00:36:14.900 [Pipeline] // stage 00:36:14.906 [Pipeline] } 00:36:14.920 [Pipeline] // node 00:36:14.924 [Pipeline] End of Pipeline 00:36:14.958 Finished: SUCCESS